00:00:00.000 Started by upstream project "autotest-nightly" build number 4285 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3648 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.034 The recommended git tool is: git 00:00:00.034 using credential 00000000-0000-0000-0000-000000000002 00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.059 Fetching changes from the remote Git repository 00:00:00.061 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.072 Using shallow fetch with depth 1 00:00:00.072 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.072 > git --version # timeout=10 00:00:00.085 > git --version # 'git version 2.39.2' 00:00:00.085 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.097 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.097 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.606 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.619 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.632 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.632 > git config core.sparsecheckout # timeout=10 00:00:02.643 > git read-tree -mu HEAD # timeout=10 00:00:02.659 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.677 Commit message: 
"jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.677 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.762 [Pipeline] Start of Pipeline 00:00:02.775 [Pipeline] library 00:00:02.776 Loading library shm_lib@master 00:00:02.776 Library shm_lib@master is cached. Copying from home. 00:00:02.793 [Pipeline] node 00:00:02.812 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.814 [Pipeline] { 00:00:02.824 [Pipeline] catchError 00:00:02.826 [Pipeline] { 00:00:02.836 [Pipeline] wrap 00:00:02.843 [Pipeline] { 00:00:02.849 [Pipeline] stage 00:00:02.850 [Pipeline] { (Prologue) 00:00:02.862 [Pipeline] echo 00:00:02.863 Node: VM-host-WFP7 00:00:02.867 [Pipeline] cleanWs 00:00:02.875 [WS-CLEANUP] Deleting project workspace... 00:00:02.875 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.880 [WS-CLEANUP] done 00:00:03.051 [Pipeline] setCustomBuildProperty 00:00:03.138 [Pipeline] httpRequest 00:00:03.498 [Pipeline] echo 00:00:03.500 Sorcerer 10.211.164.20 is alive 00:00:03.510 [Pipeline] retry 00:00:03.512 [Pipeline] { 00:00:03.524 [Pipeline] httpRequest 00:00:03.528 HttpMethod: GET 00:00:03.529 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.530 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.530 Response Code: HTTP/1.1 200 OK 00:00:03.531 Success: Status code 200 is in the accepted range: 200,404 00:00:03.532 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.677 [Pipeline] } 00:00:03.692 [Pipeline] // retry 00:00:03.700 [Pipeline] sh 00:00:03.983 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.996 [Pipeline] httpRequest 00:00:04.347 [Pipeline] echo 00:00:04.348 Sorcerer 10.211.164.20 is alive 00:00:04.358 [Pipeline] retry 00:00:04.360 [Pipeline] { 00:00:04.376 
[Pipeline] httpRequest 00:00:04.381 HttpMethod: GET 00:00:04.382 URL: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:04.383 Sending request to url: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:04.383 Response Code: HTTP/1.1 200 OK 00:00:04.384 Success: Status code 200 is in the accepted range: 200,404 00:00:04.385 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:22.603 [Pipeline] } 00:00:22.619 [Pipeline] // retry 00:00:22.625 [Pipeline] sh 00:00:22.905 + tar --no-same-owner -xf spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:25.458 [Pipeline] sh 00:00:25.746 + git -C spdk log --oneline -n5 00:00:25.746 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:00:25.746 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:00:25.746 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:00:25.746 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
00:00:25.746 029355612 bdev_ut: add manual examine bdev unit test case 00:00:25.766 [Pipeline] writeFile 00:00:25.796 [Pipeline] sh 00:00:26.120 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:26.134 [Pipeline] sh 00:00:26.418 + cat autorun-spdk.conf 00:00:26.418 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.418 SPDK_RUN_ASAN=1 00:00:26.418 SPDK_RUN_UBSAN=1 00:00:26.418 SPDK_TEST_RAID=1 00:00:26.418 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:26.426 RUN_NIGHTLY=1 00:00:26.428 [Pipeline] } 00:00:26.442 [Pipeline] // stage 00:00:26.458 [Pipeline] stage 00:00:26.461 [Pipeline] { (Run VM) 00:00:26.474 [Pipeline] sh 00:00:26.758 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:26.758 + echo 'Start stage prepare_nvme.sh' 00:00:26.758 Start stage prepare_nvme.sh 00:00:26.758 + [[ -n 2 ]] 00:00:26.758 + disk_prefix=ex2 00:00:26.758 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:26.758 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:26.758 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:26.758 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.758 ++ SPDK_RUN_ASAN=1 00:00:26.758 ++ SPDK_RUN_UBSAN=1 00:00:26.758 ++ SPDK_TEST_RAID=1 00:00:26.758 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:26.758 ++ RUN_NIGHTLY=1 00:00:26.758 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:26.758 + nvme_files=() 00:00:26.758 + declare -A nvme_files 00:00:26.758 + backend_dir=/var/lib/libvirt/images/backends 00:00:26.758 + nvme_files['nvme.img']=5G 00:00:26.758 + nvme_files['nvme-cmb.img']=5G 00:00:26.758 + nvme_files['nvme-multi0.img']=4G 00:00:26.758 + nvme_files['nvme-multi1.img']=4G 00:00:26.758 + nvme_files['nvme-multi2.img']=4G 00:00:26.758 + nvme_files['nvme-openstack.img']=8G 00:00:26.758 + nvme_files['nvme-zns.img']=5G 00:00:26.758 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:26.758 + (( SPDK_TEST_FTL == 1 )) 00:00:26.758 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:26.758 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:26.758 + for nvme in "${!nvme_files[@]}" 00:00:26.758 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:26.758 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.758 + for nvme in "${!nvme_files[@]}" 00:00:26.758 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:26.758 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.758 + for nvme in "${!nvme_files[@]}" 00:00:26.758 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:26.758 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:26.758 + for nvme in "${!nvme_files[@]}" 00:00:26.758 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:26.758 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.758 + for nvme in "${!nvme_files[@]}" 00:00:26.758 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:26.758 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.758 + for nvme in "${!nvme_files[@]}" 00:00:26.758 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:26.758 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.758 + for nvme in "${!nvme_files[@]}" 00:00:26.758 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:26.758 
Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:27.018 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:27.018 + echo 'End stage prepare_nvme.sh' 00:00:27.018 End stage prepare_nvme.sh 00:00:27.030 [Pipeline] sh 00:00:27.314 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:27.314 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:00:27.314 00:00:27.314 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:27.314 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:27.314 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:27.314 HELP=0 00:00:27.314 DRY_RUN=0 00:00:27.314 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:27.314 NVME_DISKS_TYPE=nvme,nvme, 00:00:27.314 NVME_AUTO_CREATE=0 00:00:27.314 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:27.314 NVME_CMB=,, 00:00:27.314 NVME_PMR=,, 00:00:27.314 NVME_ZNS=,, 00:00:27.314 NVME_MS=,, 00:00:27.314 NVME_FDP=,, 00:00:27.314 SPDK_VAGRANT_DISTRO=fedora39 00:00:27.314 SPDK_VAGRANT_VMCPU=10 00:00:27.314 SPDK_VAGRANT_VMRAM=12288 00:00:27.314 SPDK_VAGRANT_PROVIDER=libvirt 00:00:27.314 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:27.314 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:27.314 SPDK_OPENSTACK_NETWORK=0 00:00:27.314 VAGRANT_PACKAGE_BOX=0 00:00:27.314 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:27.314 
FORCE_DISTRO=true 00:00:27.314 VAGRANT_BOX_VERSION= 00:00:27.314 EXTRA_VAGRANTFILES= 00:00:27.314 NIC_MODEL=virtio 00:00:27.314 00:00:27.314 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:27.314 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:29.262 Bringing machine 'default' up with 'libvirt' provider... 00:00:29.831 ==> default: Creating image (snapshot of base box volume). 00:00:29.831 ==> default: Creating domain with the following settings... 00:00:29.831 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732072039_414c516c88bc1bf89fe3 00:00:29.831 ==> default: -- Domain type: kvm 00:00:29.831 ==> default: -- Cpus: 10 00:00:29.831 ==> default: -- Feature: acpi 00:00:29.831 ==> default: -- Feature: apic 00:00:29.831 ==> default: -- Feature: pae 00:00:29.831 ==> default: -- Memory: 12288M 00:00:29.831 ==> default: -- Memory Backing: hugepages: 00:00:29.831 ==> default: -- Management MAC: 00:00:29.831 ==> default: -- Loader: 00:00:29.831 ==> default: -- Nvram: 00:00:29.831 ==> default: -- Base box: spdk/fedora39 00:00:29.831 ==> default: -- Storage pool: default 00:00:29.831 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732072039_414c516c88bc1bf89fe3.img (20G) 00:00:29.831 ==> default: -- Volume Cache: default 00:00:29.831 ==> default: -- Kernel: 00:00:29.831 ==> default: -- Initrd: 00:00:29.831 ==> default: -- Graphics Type: vnc 00:00:29.831 ==> default: -- Graphics Port: -1 00:00:29.831 ==> default: -- Graphics IP: 127.0.0.1 00:00:29.831 ==> default: -- Graphics Password: Not defined 00:00:29.831 ==> default: -- Video Type: cirrus 00:00:29.831 ==> default: -- Video VRAM: 9216 00:00:29.831 ==> default: -- Sound Type: 00:00:29.831 ==> default: -- Keymap: en-us 00:00:29.831 ==> default: -- TPM Path: 00:00:29.831 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:29.831 ==> default: -- Command line args: 00:00:29.831 
==> default: -> value=-device, 00:00:29.831 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:29.831 ==> default: -> value=-drive, 00:00:29.831 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:00:29.831 ==> default: -> value=-device, 00:00:29.831 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.831 ==> default: -> value=-device, 00:00:29.831 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:29.831 ==> default: -> value=-drive, 00:00:29.831 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:29.831 ==> default: -> value=-device, 00:00:29.831 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.831 ==> default: -> value=-drive, 00:00:29.831 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:29.831 ==> default: -> value=-device, 00:00:29.831 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.831 ==> default: -> value=-drive, 00:00:29.831 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:29.831 ==> default: -> value=-device, 00:00:29.832 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:30.091 ==> default: Creating shared folders metadata... 00:00:30.092 ==> default: Starting domain. 00:00:31.473 ==> default: Waiting for domain to get an IP address... 00:00:46.429 ==> default: Waiting for SSH to become available... 00:00:47.808 ==> default: Configuring and enabling network interfaces... 
00:00:54.386 default: SSH address: 192.168.121.174:22 00:00:54.386 default: SSH username: vagrant 00:00:54.386 default: SSH auth method: private key 00:00:56.322 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:04.450 ==> default: Mounting SSHFS shared folder... 00:01:06.359 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:06.359 ==> default: Checking Mount.. 00:01:07.740 ==> default: Folder Successfully Mounted! 00:01:07.740 ==> default: Running provisioner: file... 00:01:09.119 default: ~/.gitconfig => .gitconfig 00:01:09.380 00:01:09.380 SUCCESS! 00:01:09.380 00:01:09.380 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:09.380 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:09.380 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:09.380 00:01:09.391 [Pipeline] } 00:01:09.406 [Pipeline] // stage 00:01:09.415 [Pipeline] dir 00:01:09.416 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:09.417 [Pipeline] { 00:01:09.430 [Pipeline] catchError 00:01:09.432 [Pipeline] { 00:01:09.444 [Pipeline] sh 00:01:09.726 + vagrant ssh-config --host vagrant 00:01:09.726 + sed -ne /^Host/,$p 00:01:09.726 + tee ssh_conf 00:01:12.264 Host vagrant 00:01:12.264 HostName 192.168.121.174 00:01:12.264 User vagrant 00:01:12.264 Port 22 00:01:12.264 UserKnownHostsFile /dev/null 00:01:12.264 StrictHostKeyChecking no 00:01:12.264 PasswordAuthentication no 00:01:12.264 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:12.264 IdentitiesOnly yes 00:01:12.264 LogLevel FATAL 00:01:12.264 ForwardAgent yes 00:01:12.264 ForwardX11 yes 00:01:12.264 00:01:12.278 [Pipeline] withEnv 00:01:12.280 [Pipeline] { 00:01:12.294 [Pipeline] sh 00:01:12.577 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:12.577 source /etc/os-release 00:01:12.577 [[ -e /image.version ]] && img=$(< /image.version) 00:01:12.577 # Minimal, systemd-like check. 00:01:12.577 if [[ -e /.dockerenv ]]; then 00:01:12.577 # Clear garbage from the node's name: 00:01:12.577 # agt-er_autotest_547-896 -> autotest_547-896 00:01:12.577 # $HOSTNAME is the actual container id 00:01:12.577 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:12.577 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:12.577 # We can assume this is a mount from a host where container is running, 00:01:12.577 # so fetch its hostname to easily identify the target swarm worker. 
00:01:12.577 container="$(< /etc/hostname) ($agent)" 00:01:12.577 else 00:01:12.577 # Fallback 00:01:12.577 container=$agent 00:01:12.577 fi 00:01:12.577 fi 00:01:12.577 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:12.577 00:01:12.847 [Pipeline] } 00:01:12.864 [Pipeline] // withEnv 00:01:12.873 [Pipeline] setCustomBuildProperty 00:01:12.888 [Pipeline] stage 00:01:12.890 [Pipeline] { (Tests) 00:01:12.907 [Pipeline] sh 00:01:13.189 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:13.461 [Pipeline] sh 00:01:13.742 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:14.015 [Pipeline] timeout 00:01:14.016 Timeout set to expire in 1 hr 30 min 00:01:14.018 [Pipeline] { 00:01:14.032 [Pipeline] sh 00:01:14.312 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:14.934 HEAD is now at f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:01:14.945 [Pipeline] sh 00:01:15.224 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:15.494 [Pipeline] sh 00:01:15.772 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:16.047 [Pipeline] sh 00:01:16.326 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:16.585 ++ readlink -f spdk_repo 00:01:16.585 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:16.585 + [[ -n /home/vagrant/spdk_repo ]] 00:01:16.585 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:16.585 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:16.585 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:16.585 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:16.585 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:16.585 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:16.585 + cd /home/vagrant/spdk_repo 00:01:16.585 + source /etc/os-release 00:01:16.585 ++ NAME='Fedora Linux' 00:01:16.585 ++ VERSION='39 (Cloud Edition)' 00:01:16.585 ++ ID=fedora 00:01:16.585 ++ VERSION_ID=39 00:01:16.585 ++ VERSION_CODENAME= 00:01:16.585 ++ PLATFORM_ID=platform:f39 00:01:16.585 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:16.585 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:16.585 ++ LOGO=fedora-logo-icon 00:01:16.585 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:16.585 ++ HOME_URL=https://fedoraproject.org/ 00:01:16.585 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:16.585 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:16.585 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:16.585 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:16.585 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:16.585 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:16.585 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:16.585 ++ SUPPORT_END=2024-11-12 00:01:16.585 ++ VARIANT='Cloud Edition' 00:01:16.585 ++ VARIANT_ID=cloud 00:01:16.585 + uname -a 00:01:16.585 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:16.585 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:17.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:17.151 Hugepages 00:01:17.151 node hugesize free / total 00:01:17.151 node0 1048576kB 0 / 0 00:01:17.151 node0 2048kB 0 / 0 00:01:17.151 00:01:17.151 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:17.151 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:17.151 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:17.151 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:17.151 + rm -f /tmp/spdk-ld-path 00:01:17.151 + source autorun-spdk.conf 00:01:17.151 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.151 ++ SPDK_RUN_ASAN=1 00:01:17.151 ++ SPDK_RUN_UBSAN=1 00:01:17.151 ++ SPDK_TEST_RAID=1 00:01:17.151 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.151 ++ RUN_NIGHTLY=1 00:01:17.151 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:17.151 + [[ -n '' ]] 00:01:17.152 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:17.152 + for M in /var/spdk/build-*-manifest.txt 00:01:17.152 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:17.152 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:17.152 + for M in /var/spdk/build-*-manifest.txt 00:01:17.152 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:17.152 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:17.152 + for M in /var/spdk/build-*-manifest.txt 00:01:17.152 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:17.152 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:17.152 ++ uname 00:01:17.152 + [[ Linux == \L\i\n\u\x ]] 00:01:17.152 + sudo dmesg -T 00:01:17.152 + sudo dmesg --clear 00:01:17.152 + dmesg_pid=5434 00:01:17.152 + sudo dmesg -Tw 00:01:17.152 + [[ Fedora Linux == FreeBSD ]] 00:01:17.152 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.152 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.152 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:17.152 + [[ -x /usr/src/fio-static/fio ]] 00:01:17.152 + export FIO_BIN=/usr/src/fio-static/fio 00:01:17.152 + FIO_BIN=/usr/src/fio-static/fio 00:01:17.152 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:17.152 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:17.152 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:17.152 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.152 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.152 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:17.152 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.152 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.152 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:17.409 03:08:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:17.409 03:08:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:17.409 03:08:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.409 03:08:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:17.409 03:08:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:17.409 03:08:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:17.409 03:08:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.409 03:08:06 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1 00:01:17.409 03:08:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:17.409 03:08:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:17.409 03:08:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:17.409 03:08:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:17.409 03:08:06 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:17.409 03:08:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:17.409 03:08:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:17.409 03:08:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:17.409 03:08:06 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.409 03:08:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.409 03:08:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.409 03:08:06 -- paths/export.sh@5 -- $ export PATH 00:01:17.409 03:08:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.409 03:08:06 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:17.409 03:08:06 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:17.409 03:08:06 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732072086.XXXXXX 00:01:17.409 03:08:06 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732072086.KQv8YG 00:01:17.409 03:08:06 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:17.409 03:08:06 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:17.409 03:08:06 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:17.409 03:08:06 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:17.409 03:08:06 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:17.409 03:08:06 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:17.409 03:08:06 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:17.409 03:08:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.409 03:08:06 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:17.409 03:08:06 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:17.409 03:08:06 -- pm/common@17 -- $ local monitor 00:01:17.409 03:08:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:17.409 03:08:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:17.409 03:08:06 -- pm/common@25 -- $ sleep 1 00:01:17.409 03:08:06 -- pm/common@21 -- $ date +%s 00:01:17.409 03:08:06 -- pm/common@21 -- $ date +%s 00:01:17.409 
03:08:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732072086 00:01:17.409 03:08:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732072086 00:01:17.409 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732072086_collect-vmstat.pm.log 00:01:17.409 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732072086_collect-cpu-load.pm.log 00:01:18.785 03:08:07 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:18.785 03:08:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:18.785 03:08:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:18.785 03:08:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:18.785 03:08:07 -- spdk/autobuild.sh@16 -- $ date -u 00:01:18.785 Wed Nov 20 03:08:07 AM UTC 2024 00:01:18.785 03:08:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:18.785 v25.01-pre-199-gf22e807f1 00:01:18.785 03:08:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:18.785 03:08:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:18.785 03:08:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:18.785 03:08:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:18.785 03:08:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.785 ************************************ 00:01:18.785 START TEST asan 00:01:18.785 ************************************ 00:01:18.785 using asan 00:01:18.785 03:08:08 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:18.785 00:01:18.785 real 0m0.000s 00:01:18.785 user 0m0.000s 00:01:18.785 sys 0m0.000s 00:01:18.785 03:08:08 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:18.785 03:08:08 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:18.785 ************************************ 00:01:18.785 END TEST asan 00:01:18.785 ************************************ 00:01:18.785 03:08:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:18.785 03:08:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:18.785 03:08:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:18.785 03:08:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:18.785 03:08:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.785 ************************************ 00:01:18.785 START TEST ubsan 00:01:18.785 ************************************ 00:01:18.785 using ubsan 00:01:18.785 03:08:08 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:18.785 00:01:18.785 real 0m0.000s 00:01:18.785 user 0m0.000s 00:01:18.785 sys 0m0.000s 00:01:18.785 03:08:08 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:18.785 03:08:08 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:18.785 ************************************ 00:01:18.785 END TEST ubsan 00:01:18.785 ************************************ 00:01:18.785 03:08:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:18.785 03:08:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:18.785 03:08:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:18.785 03:08:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:18.785 03:08:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:18.785 03:08:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:18.785 03:08:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:18.785 03:08:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:18.785 03:08:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:18.785 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:18.785 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:19.354 Using 'verbs' RDMA provider 00:01:35.238 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:50.128 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:50.128 Creating mk/config.mk...done. 00:01:50.128 Creating mk/cc.flags.mk...done. 00:01:50.128 Type 'make' to build. 00:01:50.128 03:08:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:50.128 03:08:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:50.128 03:08:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:50.128 03:08:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.128 ************************************ 00:01:50.128 START TEST make 00:01:50.128 ************************************ 00:01:50.128 03:08:39 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:50.128 make[1]: Nothing to be done for 'all'. 
00:02:00.113 The Meson build system 00:02:00.113 Version: 1.5.0 00:02:00.113 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:00.113 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:00.113 Build type: native build 00:02:00.113 Program cat found: YES (/usr/bin/cat) 00:02:00.113 Project name: DPDK 00:02:00.113 Project version: 24.03.0 00:02:00.113 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:00.113 C linker for the host machine: cc ld.bfd 2.40-14 00:02:00.113 Host machine cpu family: x86_64 00:02:00.113 Host machine cpu: x86_64 00:02:00.113 Message: ## Building in Developer Mode ## 00:02:00.113 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.113 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:00.113 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.113 Program python3 found: YES (/usr/bin/python3) 00:02:00.113 Program cat found: YES (/usr/bin/cat) 00:02:00.113 Compiler for C supports arguments -march=native: YES 00:02:00.113 Checking for size of "void *" : 8 00:02:00.113 Checking for size of "void *" : 8 (cached) 00:02:00.113 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:00.113 Library m found: YES 00:02:00.113 Library numa found: YES 00:02:00.113 Has header "numaif.h" : YES 00:02:00.113 Library fdt found: NO 00:02:00.113 Library execinfo found: NO 00:02:00.113 Has header "execinfo.h" : YES 00:02:00.113 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:00.113 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.113 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.113 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.113 Run-time dependency openssl found: YES 3.1.1 00:02:00.113 Run-time dependency libpcap found: YES 1.10.4 00:02:00.113 Has header "pcap.h" with dependency 
libpcap: YES 00:02:00.113 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.113 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.113 Compiler for C supports arguments -Wformat: YES 00:02:00.113 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:00.113 Compiler for C supports arguments -Wformat-security: NO 00:02:00.113 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.113 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.113 Compiler for C supports arguments -Wnested-externs: YES 00:02:00.113 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.113 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.113 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.113 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.113 Compiler for C supports arguments -Wundef: YES 00:02:00.113 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.113 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.113 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:00.113 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.113 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.113 Program objdump found: YES (/usr/bin/objdump) 00:02:00.113 Compiler for C supports arguments -mavx512f: YES 00:02:00.113 Checking if "AVX512 checking" compiles: YES 00:02:00.113 Fetching value of define "__SSE4_2__" : 1 00:02:00.113 Fetching value of define "__AES__" : 1 00:02:00.113 Fetching value of define "__AVX__" : 1 00:02:00.113 Fetching value of define "__AVX2__" : 1 00:02:00.113 Fetching value of define "__AVX512BW__" : 1 00:02:00.113 Fetching value of define "__AVX512CD__" : 1 00:02:00.113 Fetching value of define "__AVX512DQ__" : 1 00:02:00.113 Fetching value of define "__AVX512F__" : 1 00:02:00.113 Fetching value of define "__AVX512VL__" : 1 00:02:00.114 Fetching value of define 
"__PCLMUL__" : 1 00:02:00.114 Fetching value of define "__RDRND__" : 1 00:02:00.114 Fetching value of define "__RDSEED__" : 1 00:02:00.114 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:00.114 Fetching value of define "__znver1__" : (undefined) 00:02:00.114 Fetching value of define "__znver2__" : (undefined) 00:02:00.114 Fetching value of define "__znver3__" : (undefined) 00:02:00.114 Fetching value of define "__znver4__" : (undefined) 00:02:00.114 Library asan found: YES 00:02:00.114 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.114 Message: lib/log: Defining dependency "log" 00:02:00.114 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.114 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.114 Library rt found: YES 00:02:00.114 Checking for function "getentropy" : NO 00:02:00.114 Message: lib/eal: Defining dependency "eal" 00:02:00.114 Message: lib/ring: Defining dependency "ring" 00:02:00.114 Message: lib/rcu: Defining dependency "rcu" 00:02:00.114 Message: lib/mempool: Defining dependency "mempool" 00:02:00.114 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.114 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.114 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.114 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.114 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:00.114 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:00.114 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:00.114 Compiler for C supports arguments -mpclmul: YES 00:02:00.114 Compiler for C supports arguments -maes: YES 00:02:00.114 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.114 Compiler for C supports arguments -mavx512bw: YES 00:02:00.114 Compiler for C supports arguments -mavx512dq: YES 00:02:00.114 Compiler for C supports arguments -mavx512vl: YES 00:02:00.114 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:00.114 Compiler for C supports arguments -mavx2: YES 00:02:00.114 Compiler for C supports arguments -mavx: YES 00:02:00.114 Message: lib/net: Defining dependency "net" 00:02:00.114 Message: lib/meter: Defining dependency "meter" 00:02:00.114 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.114 Message: lib/pci: Defining dependency "pci" 00:02:00.114 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.114 Message: lib/hash: Defining dependency "hash" 00:02:00.114 Message: lib/timer: Defining dependency "timer" 00:02:00.114 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.114 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.114 Message: lib/dmadev: Defining dependency "dmadev" 00:02:00.114 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.114 Message: lib/power: Defining dependency "power" 00:02:00.114 Message: lib/reorder: Defining dependency "reorder" 00:02:00.114 Message: lib/security: Defining dependency "security" 00:02:00.114 Has header "linux/userfaultfd.h" : YES 00:02:00.114 Has header "linux/vduse.h" : YES 00:02:00.114 Message: lib/vhost: Defining dependency "vhost" 00:02:00.114 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.114 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.114 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.114 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.114 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.114 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.114 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.114 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.114 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.114 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.114 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:00.114 Configuring doxy-api-html.conf using configuration 00:02:00.114 Configuring doxy-api-man.conf using configuration 00:02:00.114 Program mandb found: YES (/usr/bin/mandb) 00:02:00.114 Program sphinx-build found: NO 00:02:00.114 Configuring rte_build_config.h using configuration 00:02:00.114 Message: 00:02:00.114 ================= 00:02:00.114 Applications Enabled 00:02:00.114 ================= 00:02:00.114 00:02:00.114 apps: 00:02:00.114 00:02:00.114 00:02:00.114 Message: 00:02:00.114 ================= 00:02:00.114 Libraries Enabled 00:02:00.114 ================= 00:02:00.114 00:02:00.114 libs: 00:02:00.114 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.114 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.114 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.114 00:02:00.114 Message: 00:02:00.114 =============== 00:02:00.114 Drivers Enabled 00:02:00.114 =============== 00:02:00.114 00:02:00.114 common: 00:02:00.114 00:02:00.114 bus: 00:02:00.114 pci, vdev, 00:02:00.114 mempool: 00:02:00.114 ring, 00:02:00.114 dma: 00:02:00.114 00:02:00.114 net: 00:02:00.114 00:02:00.114 crypto: 00:02:00.114 00:02:00.114 compress: 00:02:00.114 00:02:00.114 vdpa: 00:02:00.114 00:02:00.114 00:02:00.114 Message: 00:02:00.114 ================= 00:02:00.114 Content Skipped 00:02:00.114 ================= 00:02:00.114 00:02:00.114 apps: 00:02:00.114 dumpcap: explicitly disabled via build config 00:02:00.114 graph: explicitly disabled via build config 00:02:00.114 pdump: explicitly disabled via build config 00:02:00.114 proc-info: explicitly disabled via build config 00:02:00.114 test-acl: explicitly disabled via build config 00:02:00.114 test-bbdev: explicitly disabled via build config 00:02:00.114 test-cmdline: explicitly disabled via build config 00:02:00.114 test-compress-perf: explicitly disabled via build config 00:02:00.114 test-crypto-perf: explicitly disabled via build 
config 00:02:00.114 test-dma-perf: explicitly disabled via build config 00:02:00.114 test-eventdev: explicitly disabled via build config 00:02:00.114 test-fib: explicitly disabled via build config 00:02:00.114 test-flow-perf: explicitly disabled via build config 00:02:00.114 test-gpudev: explicitly disabled via build config 00:02:00.114 test-mldev: explicitly disabled via build config 00:02:00.114 test-pipeline: explicitly disabled via build config 00:02:00.114 test-pmd: explicitly disabled via build config 00:02:00.114 test-regex: explicitly disabled via build config 00:02:00.114 test-sad: explicitly disabled via build config 00:02:00.114 test-security-perf: explicitly disabled via build config 00:02:00.114 00:02:00.114 libs: 00:02:00.114 argparse: explicitly disabled via build config 00:02:00.114 metrics: explicitly disabled via build config 00:02:00.114 acl: explicitly disabled via build config 00:02:00.114 bbdev: explicitly disabled via build config 00:02:00.114 bitratestats: explicitly disabled via build config 00:02:00.114 bpf: explicitly disabled via build config 00:02:00.114 cfgfile: explicitly disabled via build config 00:02:00.114 distributor: explicitly disabled via build config 00:02:00.114 efd: explicitly disabled via build config 00:02:00.114 eventdev: explicitly disabled via build config 00:02:00.114 dispatcher: explicitly disabled via build config 00:02:00.114 gpudev: explicitly disabled via build config 00:02:00.114 gro: explicitly disabled via build config 00:02:00.114 gso: explicitly disabled via build config 00:02:00.114 ip_frag: explicitly disabled via build config 00:02:00.114 jobstats: explicitly disabled via build config 00:02:00.114 latencystats: explicitly disabled via build config 00:02:00.114 lpm: explicitly disabled via build config 00:02:00.114 member: explicitly disabled via build config 00:02:00.114 pcapng: explicitly disabled via build config 00:02:00.114 rawdev: explicitly disabled via build config 00:02:00.114 regexdev: explicitly 
disabled via build config 00:02:00.114 mldev: explicitly disabled via build config 00:02:00.114 rib: explicitly disabled via build config 00:02:00.114 sched: explicitly disabled via build config 00:02:00.114 stack: explicitly disabled via build config 00:02:00.114 ipsec: explicitly disabled via build config 00:02:00.114 pdcp: explicitly disabled via build config 00:02:00.114 fib: explicitly disabled via build config 00:02:00.115 port: explicitly disabled via build config 00:02:00.115 pdump: explicitly disabled via build config 00:02:00.115 table: explicitly disabled via build config 00:02:00.115 pipeline: explicitly disabled via build config 00:02:00.115 graph: explicitly disabled via build config 00:02:00.115 node: explicitly disabled via build config 00:02:00.115 00:02:00.115 drivers: 00:02:00.115 common/cpt: not in enabled drivers build config 00:02:00.115 common/dpaax: not in enabled drivers build config 00:02:00.115 common/iavf: not in enabled drivers build config 00:02:00.115 common/idpf: not in enabled drivers build config 00:02:00.115 common/ionic: not in enabled drivers build config 00:02:00.115 common/mvep: not in enabled drivers build config 00:02:00.115 common/octeontx: not in enabled drivers build config 00:02:00.115 bus/auxiliary: not in enabled drivers build config 00:02:00.115 bus/cdx: not in enabled drivers build config 00:02:00.115 bus/dpaa: not in enabled drivers build config 00:02:00.115 bus/fslmc: not in enabled drivers build config 00:02:00.115 bus/ifpga: not in enabled drivers build config 00:02:00.115 bus/platform: not in enabled drivers build config 00:02:00.115 bus/uacce: not in enabled drivers build config 00:02:00.115 bus/vmbus: not in enabled drivers build config 00:02:00.115 common/cnxk: not in enabled drivers build config 00:02:00.115 common/mlx5: not in enabled drivers build config 00:02:00.115 common/nfp: not in enabled drivers build config 00:02:00.115 common/nitrox: not in enabled drivers build config 00:02:00.115 common/qat: not 
in enabled drivers build config 00:02:00.115 common/sfc_efx: not in enabled drivers build config 00:02:00.115 mempool/bucket: not in enabled drivers build config 00:02:00.115 mempool/cnxk: not in enabled drivers build config 00:02:00.115 mempool/dpaa: not in enabled drivers build config 00:02:00.115 mempool/dpaa2: not in enabled drivers build config 00:02:00.115 mempool/octeontx: not in enabled drivers build config 00:02:00.115 mempool/stack: not in enabled drivers build config 00:02:00.115 dma/cnxk: not in enabled drivers build config 00:02:00.115 dma/dpaa: not in enabled drivers build config 00:02:00.115 dma/dpaa2: not in enabled drivers build config 00:02:00.115 dma/hisilicon: not in enabled drivers build config 00:02:00.115 dma/idxd: not in enabled drivers build config 00:02:00.115 dma/ioat: not in enabled drivers build config 00:02:00.115 dma/skeleton: not in enabled drivers build config 00:02:00.115 net/af_packet: not in enabled drivers build config 00:02:00.115 net/af_xdp: not in enabled drivers build config 00:02:00.115 net/ark: not in enabled drivers build config 00:02:00.115 net/atlantic: not in enabled drivers build config 00:02:00.115 net/avp: not in enabled drivers build config 00:02:00.115 net/axgbe: not in enabled drivers build config 00:02:00.115 net/bnx2x: not in enabled drivers build config 00:02:00.115 net/bnxt: not in enabled drivers build config 00:02:00.115 net/bonding: not in enabled drivers build config 00:02:00.115 net/cnxk: not in enabled drivers build config 00:02:00.115 net/cpfl: not in enabled drivers build config 00:02:00.115 net/cxgbe: not in enabled drivers build config 00:02:00.115 net/dpaa: not in enabled drivers build config 00:02:00.115 net/dpaa2: not in enabled drivers build config 00:02:00.115 net/e1000: not in enabled drivers build config 00:02:00.115 net/ena: not in enabled drivers build config 00:02:00.115 net/enetc: not in enabled drivers build config 00:02:00.115 net/enetfec: not in enabled drivers build config 
00:02:00.115 net/enic: not in enabled drivers build config 00:02:00.115 net/failsafe: not in enabled drivers build config 00:02:00.115 net/fm10k: not in enabled drivers build config 00:02:00.115 net/gve: not in enabled drivers build config 00:02:00.115 net/hinic: not in enabled drivers build config 00:02:00.115 net/hns3: not in enabled drivers build config 00:02:00.115 net/i40e: not in enabled drivers build config 00:02:00.115 net/iavf: not in enabled drivers build config 00:02:00.115 net/ice: not in enabled drivers build config 00:02:00.115 net/idpf: not in enabled drivers build config 00:02:00.115 net/igc: not in enabled drivers build config 00:02:00.115 net/ionic: not in enabled drivers build config 00:02:00.115 net/ipn3ke: not in enabled drivers build config 00:02:00.115 net/ixgbe: not in enabled drivers build config 00:02:00.115 net/mana: not in enabled drivers build config 00:02:00.115 net/memif: not in enabled drivers build config 00:02:00.115 net/mlx4: not in enabled drivers build config 00:02:00.115 net/mlx5: not in enabled drivers build config 00:02:00.115 net/mvneta: not in enabled drivers build config 00:02:00.115 net/mvpp2: not in enabled drivers build config 00:02:00.115 net/netvsc: not in enabled drivers build config 00:02:00.115 net/nfb: not in enabled drivers build config 00:02:00.115 net/nfp: not in enabled drivers build config 00:02:00.115 net/ngbe: not in enabled drivers build config 00:02:00.115 net/null: not in enabled drivers build config 00:02:00.115 net/octeontx: not in enabled drivers build config 00:02:00.115 net/octeon_ep: not in enabled drivers build config 00:02:00.115 net/pcap: not in enabled drivers build config 00:02:00.115 net/pfe: not in enabled drivers build config 00:02:00.115 net/qede: not in enabled drivers build config 00:02:00.115 net/ring: not in enabled drivers build config 00:02:00.115 net/sfc: not in enabled drivers build config 00:02:00.115 net/softnic: not in enabled drivers build config 00:02:00.115 net/tap: not in 
enabled drivers build config 00:02:00.115 net/thunderx: not in enabled drivers build config 00:02:00.115 net/txgbe: not in enabled drivers build config 00:02:00.115 net/vdev_netvsc: not in enabled drivers build config 00:02:00.115 net/vhost: not in enabled drivers build config 00:02:00.115 net/virtio: not in enabled drivers build config 00:02:00.115 net/vmxnet3: not in enabled drivers build config 00:02:00.115 raw/*: missing internal dependency, "rawdev" 00:02:00.115 crypto/armv8: not in enabled drivers build config 00:02:00.115 crypto/bcmfs: not in enabled drivers build config 00:02:00.115 crypto/caam_jr: not in enabled drivers build config 00:02:00.115 crypto/ccp: not in enabled drivers build config 00:02:00.115 crypto/cnxk: not in enabled drivers build config 00:02:00.115 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.115 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.115 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.115 crypto/mlx5: not in enabled drivers build config 00:02:00.115 crypto/mvsam: not in enabled drivers build config 00:02:00.115 crypto/nitrox: not in enabled drivers build config 00:02:00.115 crypto/null: not in enabled drivers build config 00:02:00.115 crypto/octeontx: not in enabled drivers build config 00:02:00.115 crypto/openssl: not in enabled drivers build config 00:02:00.115 crypto/scheduler: not in enabled drivers build config 00:02:00.115 crypto/uadk: not in enabled drivers build config 00:02:00.115 crypto/virtio: not in enabled drivers build config 00:02:00.115 compress/isal: not in enabled drivers build config 00:02:00.115 compress/mlx5: not in enabled drivers build config 00:02:00.115 compress/nitrox: not in enabled drivers build config 00:02:00.115 compress/octeontx: not in enabled drivers build config 00:02:00.115 compress/zlib: not in enabled drivers build config 00:02:00.115 regex/*: missing internal dependency, "regexdev" 00:02:00.115 ml/*: missing internal dependency, "mldev" 
00:02:00.115 vdpa/ifc: not in enabled drivers build config 00:02:00.115 vdpa/mlx5: not in enabled drivers build config 00:02:00.115 vdpa/nfp: not in enabled drivers build config 00:02:00.115 vdpa/sfc: not in enabled drivers build config 00:02:00.115 event/*: missing internal dependency, "eventdev" 00:02:00.116 baseband/*: missing internal dependency, "bbdev" 00:02:00.116 gpu/*: missing internal dependency, "gpudev" 00:02:00.116 00:02:00.116 00:02:00.116 Build targets in project: 85 00:02:00.116 00:02:00.116 DPDK 24.03.0 00:02:00.116 00:02:00.116 User defined options 00:02:00.116 buildtype : debug 00:02:00.116 default_library : shared 00:02:00.116 libdir : lib 00:02:00.116 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:00.116 b_sanitize : address 00:02:00.116 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:00.116 c_link_args : 00:02:00.116 cpu_instruction_set: native 00:02:00.116 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:00.116 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:00.116 enable_docs : false 00:02:00.116 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:00.116 enable_kmods : false 00:02:00.116 max_lcores : 128 00:02:00.116 tests : false 00:02:00.116 00:02:00.116 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.376 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:00.376 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:00.636 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.636 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.636 [4/268] Linking static target lib/librte_kvargs.a 00:02:00.636 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.636 [6/268] Linking static target lib/librte_log.a 00:02:00.896 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.896 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.896 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.896 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.896 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.896 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.896 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.896 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.155 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.155 [16/268] Linking static target lib/librte_telemetry.a 00:02:01.155 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.155 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:01.416 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.416 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.675 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.675 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.675 [23/268] Linking target lib/librte_log.so.24.1 00:02:01.675 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.675 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.675 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.675 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.675 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.935 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.935 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.935 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.935 [32/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.935 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.935 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:02.195 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:02.195 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.195 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.195 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.195 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.195 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.195 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.195 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.195 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.455 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.455 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.455 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.455 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.455 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.715 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.715 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.715 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.975 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.975 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.975 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.975 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.975 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.975 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.975 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.235 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.235 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.235 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.494 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.494 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.494 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.494 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.494 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.494 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 
00:02:03.754 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:03.754 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:03.754 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:04.017 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:04.017 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:04.017 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:04.017 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:04.017 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:04.017 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:04.017 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:04.277 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:04.277 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:04.277 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:04.277 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:04.537 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:04.537 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:04.537 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:04.796 [85/268] Linking static target lib/librte_eal.a
00:02:04.796 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:04.796 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:04.796 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:04.796 [89/268] Linking static target lib/librte_ring.a
00:02:04.796 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:04.796 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:04.796 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:04.796 [93/268] Linking static target lib/librte_mempool.a
00:02:05.057 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:05.057 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:05.057 [96/268] Linking static target lib/librte_rcu.a
00:02:05.057 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:05.057 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:05.057 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:05.316 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.316 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:05.316 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:05.575 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:05.576 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.576 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:05.576 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:05.576 [107/268] Linking static target lib/librte_meter.a
00:02:05.576 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:05.576 [109/268] Linking static target lib/librte_mbuf.a
00:02:05.576 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:05.576 [111/268] Linking static target lib/librte_net.a
00:02:05.835 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:05.835 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:05.835 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.835 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.095 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:06.095 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.095 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:06.355 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:06.355 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:06.614 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.614 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:06.874 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:06.874 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:06.874 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:06.874 [126/268] Linking static target lib/librte_pci.a
00:02:07.134 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:07.134 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:07.134 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:07.134 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:07.134 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:07.134 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:07.134 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:07.393 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.393 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:07.393 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:07.393 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:07.393 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:07.393 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:07.393 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:07.393 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:07.393 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:07.393 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:07.393 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:07.393 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:07.652 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:07.652 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:07.652 [148/268] Linking static target lib/librte_cmdline.a
00:02:07.912 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:07.912 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:07.912 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:07.912 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:08.172 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:08.172 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:08.172 [155/268] Linking static target lib/librte_timer.a
00:02:08.172 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:08.432 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:08.432 [158/268] Linking static target lib/librte_ethdev.a
00:02:08.432 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:08.432 [160/268] Linking static target lib/librte_compressdev.a
00:02:08.432 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:08.693 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:08.693 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:08.693 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:08.693 [165/268] Linking static target lib/librte_dmadev.a
00:02:08.693 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.693 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:08.953 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:08.953 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:08.953 [170/268] Linking static target lib/librte_hash.a
00:02:08.953 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.213 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:09.213 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:09.213 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:09.213 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.473 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.473 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:09.473 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:09.473 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:09.473 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:09.732 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:09.732 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:09.732 [183/268] Linking static target lib/librte_cryptodev.a
00:02:09.732 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:09.732 [185/268] Linking static target lib/librte_power.a
00:02:09.991 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:09.991 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:09.991 [188/268] Linking static target lib/librte_reorder.a
00:02:09.991 [189/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.253 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:10.253 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:10.253 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:10.253 [193/268] Linking static target lib/librte_security.a
00:02:10.513 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.771 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.029 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.029 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:11.029 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:11.029 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:11.289 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:11.289 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:11.289 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:11.549 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:11.549 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:11.549 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:11.808 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:11.808 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.808 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:11.808 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:12.067 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:12.067 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:12.067 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:12.067 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:12.067 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:12.067 [215/268] Linking static target drivers/librte_bus_vdev.a
00:02:12.327 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:12.327 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:12.327 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:12.327 [219/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:12.327 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:12.327 [221/268] Linking static target drivers/librte_bus_pci.a
00:02:12.327 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.327 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:12.586 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:12.586 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:12.586 [226/268] Linking static target drivers/librte_mempool_ring.a
00:02:12.586 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.524 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:14.462 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.462 [230/268] Linking target lib/librte_eal.so.24.1
00:02:14.721 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:14.721 [232/268] Linking target lib/librte_pci.so.24.1
00:02:14.721 [233/268] Linking target lib/librte_meter.so.24.1
00:02:14.721 [234/268] Linking target lib/librte_dmadev.so.24.1
00:02:14.721 [235/268] Linking target lib/librte_timer.so.24.1
00:02:14.721 [236/268] Linking target lib/librte_ring.so.24.1
00:02:14.721 [237/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:14.721 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:14.721 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:14.721 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:14.721 [241/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:14.721 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:14.721 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:14.980 [244/268] Linking target lib/librte_rcu.so.24.1
00:02:14.980 [245/268] Linking target lib/librte_mempool.so.24.1
00:02:14.980 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:14.980 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:14.980 [248/268] Linking target lib/librte_mbuf.so.24.1
00:02:14.980 [249/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:15.240 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:15.240 [251/268] Linking target lib/librte_net.so.24.1
00:02:15.240 [252/268] Linking target lib/librte_compressdev.so.24.1
00:02:15.240 [253/268] Linking target lib/librte_reorder.so.24.1
00:02:15.240 [254/268] Linking target lib/librte_cryptodev.so.24.1
00:02:15.240 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:15.240 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:15.498 [257/268] Linking target lib/librte_cmdline.so.24.1
00:02:15.498 [258/268] Linking target lib/librte_security.so.24.1
00:02:15.498 [259/268] Linking target lib/librte_hash.so.24.1
00:02:15.498 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:16.434 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.435 [262/268] Linking target lib/librte_ethdev.so.24.1
00:02:16.694 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:16.695 [264/268] Linking target lib/librte_power.so.24.1
00:02:17.265 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:17.265 [266/268] Linking static target lib/librte_vhost.a
00:02:19.800 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.800 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:19.800 INFO: autodetecting backend as ninja
00:02:19.800 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:37.888 CC lib/ut/ut.o
00:02:37.888 CC lib/log/log_flags.o
00:02:37.888 CC lib/log/log.o
00:02:37.888 CC lib/log/log_deprecated.o
00:02:37.888 CC lib/ut_mock/mock.o
00:02:37.888 LIB libspdk_ut.a
00:02:37.888 LIB libspdk_log.a
00:02:37.888 SO libspdk_ut.so.2.0
00:02:37.888 SO libspdk_log.so.7.1
00:02:37.888 LIB libspdk_ut_mock.a
00:02:37.888 SYMLINK libspdk_ut.so
00:02:37.888 SO libspdk_ut_mock.so.6.0
00:02:37.888 SYMLINK libspdk_log.so
00:02:37.888 SYMLINK libspdk_ut_mock.so
00:02:37.888 CC lib/util/base64.o
00:02:37.888 CC lib/util/bit_array.o
00:02:37.888 CC lib/util/cpuset.o
00:02:37.888 CC lib/util/crc16.o
00:02:37.888 CC lib/util/crc32.o
00:02:37.888 CC lib/ioat/ioat.o
00:02:37.888 CC lib/util/crc32c.o
00:02:37.888 CXX lib/trace_parser/trace.o
00:02:37.888 CC lib/dma/dma.o
00:02:37.888 CC lib/vfio_user/host/vfio_user_pci.o
00:02:37.888 CC lib/util/crc32_ieee.o
00:02:37.888 CC lib/util/crc64.o
00:02:37.888 CC lib/util/dif.o
00:02:37.888 CC lib/util/fd.o
00:02:37.888 CC lib/util/fd_group.o
00:02:37.888 CC lib/util/file.o
00:02:37.888 CC lib/util/hexlify.o
00:02:37.888 CC lib/util/iov.o
00:02:37.888 LIB libspdk_dma.a
00:02:37.888 SO libspdk_dma.so.5.0
00:02:37.888 CC lib/vfio_user/host/vfio_user.o
00:02:37.888 LIB libspdk_ioat.a
00:02:37.888 SO libspdk_ioat.so.7.0
00:02:37.888 SYMLINK libspdk_dma.so
00:02:37.888 CC lib/util/math.o
00:02:37.888 CC lib/util/net.o
00:02:37.888 CC lib/util/pipe.o
00:02:37.888 CC lib/util/strerror_tls.o
00:02:37.888 SYMLINK libspdk_ioat.so
00:02:37.888 CC lib/util/string.o
00:02:37.888 CC lib/util/uuid.o
00:02:37.888 CC lib/util/xor.o
00:02:37.888 CC lib/util/zipf.o
00:02:37.888 CC lib/util/md5.o
00:02:37.888 LIB libspdk_vfio_user.a
00:02:37.888 SO libspdk_vfio_user.so.5.0
00:02:37.888 SYMLINK libspdk_vfio_user.so
00:02:37.888 LIB libspdk_util.a
00:02:37.888 SO libspdk_util.so.10.1
00:02:37.888 LIB libspdk_trace_parser.a
00:02:37.888 SYMLINK libspdk_util.so
00:02:37.888 SO libspdk_trace_parser.so.6.0
00:02:37.888 SYMLINK libspdk_trace_parser.so
00:02:37.888 CC lib/idxd/idxd.o
00:02:37.888 CC lib/idxd/idxd_user.o
00:02:37.888 CC lib/idxd/idxd_kernel.o
00:02:37.888 CC lib/json/json_parse.o
00:02:37.888 CC lib/json/json_util.o
00:02:37.889 CC lib/json/json_write.o
00:02:37.889 CC lib/rdma_utils/rdma_utils.o
00:02:37.889 CC lib/vmd/vmd.o
00:02:37.889 CC lib/conf/conf.o
00:02:37.889 CC lib/env_dpdk/env.o
00:02:37.889 CC lib/env_dpdk/memory.o
00:02:37.889 CC lib/env_dpdk/pci.o
00:02:37.889 CC lib/env_dpdk/init.o
00:02:37.889 LIB libspdk_conf.a
00:02:37.889 CC lib/env_dpdk/threads.o
00:02:37.889 SO libspdk_conf.so.6.0
00:02:37.889 LIB libspdk_rdma_utils.a
00:02:37.889 LIB libspdk_json.a
00:02:37.889 SO libspdk_rdma_utils.so.1.0
00:02:37.889 SYMLINK libspdk_conf.so
00:02:37.889 SO libspdk_json.so.6.0
00:02:37.889 CC lib/env_dpdk/pci_ioat.o
00:02:37.889 SYMLINK libspdk_rdma_utils.so
00:02:37.889 CC lib/env_dpdk/pci_virtio.o
00:02:37.889 SYMLINK libspdk_json.so
00:02:37.889 CC lib/env_dpdk/pci_vmd.o
00:02:38.148 CC lib/env_dpdk/pci_idxd.o
00:02:38.148 CC lib/env_dpdk/pci_event.o
00:02:38.148 CC lib/rdma_provider/common.o
00:02:38.148 CC lib/jsonrpc/jsonrpc_server.o
00:02:38.148 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:38.148 CC lib/vmd/led.o
00:02:38.148 CC lib/env_dpdk/sigbus_handler.o
00:02:38.148 CC lib/env_dpdk/pci_dpdk.o
00:02:38.148 LIB libspdk_idxd.a
00:02:38.148 SO libspdk_idxd.so.12.1
00:02:38.148 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:38.408 CC lib/jsonrpc/jsonrpc_client.o
00:02:38.408 LIB libspdk_vmd.a
00:02:38.408 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:38.408 SYMLINK libspdk_idxd.so
00:02:38.408 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:38.408 SO libspdk_vmd.so.6.0
00:02:38.408 LIB libspdk_rdma_provider.a
00:02:38.408 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:38.408 SO libspdk_rdma_provider.so.7.0
00:02:38.408 SYMLINK libspdk_vmd.so
00:02:38.408 SYMLINK libspdk_rdma_provider.so
00:02:38.667 LIB libspdk_jsonrpc.a
00:02:38.667 SO libspdk_jsonrpc.so.6.0
00:02:38.667 SYMLINK libspdk_jsonrpc.so
00:02:38.926 CC lib/rpc/rpc.o
00:02:39.185 LIB libspdk_env_dpdk.a
00:02:39.185 LIB libspdk_rpc.a
00:02:39.185 SO libspdk_env_dpdk.so.15.1
00:02:39.185 SO libspdk_rpc.so.6.0
00:02:39.444 SYMLINK libspdk_rpc.so
00:02:39.444 SYMLINK libspdk_env_dpdk.so
00:02:39.706 CC lib/notify/notify.o
00:02:39.706 CC lib/notify/notify_rpc.o
00:02:39.706 CC lib/trace/trace.o
00:02:39.706 CC lib/trace/trace_flags.o
00:02:39.706 CC lib/trace/trace_rpc.o
00:02:39.706 CC lib/keyring/keyring_rpc.o
00:02:39.706 CC lib/keyring/keyring.o
00:02:39.964 LIB libspdk_notify.a
00:02:39.964 SO libspdk_notify.so.6.0
00:02:39.964 SYMLINK libspdk_notify.so
00:02:39.964 LIB libspdk_keyring.a
00:02:39.964 LIB libspdk_trace.a
00:02:39.964 SO libspdk_keyring.so.2.0
00:02:40.225 SO libspdk_trace.so.11.0
00:02:40.225 SYMLINK libspdk_keyring.so
00:02:40.225 SYMLINK libspdk_trace.so
00:02:40.485 CC lib/thread/thread.o
00:02:40.485 CC lib/thread/iobuf.o
00:02:40.485 CC lib/sock/sock.o
00:02:40.485 CC lib/sock/sock_rpc.o
00:02:41.055 LIB libspdk_sock.a
00:02:41.055 SO libspdk_sock.so.10.0
00:02:41.055 SYMLINK libspdk_sock.so
00:02:41.625 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:41.625 CC lib/nvme/nvme_ctrlr.o
00:02:41.625 CC lib/nvme/nvme_ns_cmd.o
00:02:41.625 CC lib/nvme/nvme_fabric.o
00:02:41.625 CC lib/nvme/nvme_ns.o
00:02:41.625 CC lib/nvme/nvme_pcie_common.o
00:02:41.626 CC lib/nvme/nvme_pcie.o
00:02:41.626 CC lib/nvme/nvme_qpair.o
00:02:41.626 CC lib/nvme/nvme.o
00:02:42.231 LIB libspdk_thread.a
00:02:42.231 CC lib/nvme/nvme_quirks.o
00:02:42.231 SO libspdk_thread.so.11.0
00:02:42.231 CC lib/nvme/nvme_transport.o
00:02:42.231 CC lib/nvme/nvme_discovery.o
00:02:42.231 SYMLINK libspdk_thread.so
00:02:42.231 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:42.231 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:42.231 CC lib/accel/accel.o
00:02:42.231 CC lib/accel/accel_rpc.o
00:02:42.492 CC lib/nvme/nvme_tcp.o
00:02:42.492 CC lib/accel/accel_sw.o
00:02:42.492 CC lib/nvme/nvme_opal.o
00:02:42.492 CC lib/nvme/nvme_io_msg.o
00:02:42.752 CC lib/nvme/nvme_poll_group.o
00:02:42.752 CC lib/nvme/nvme_zns.o
00:02:42.752 CC lib/nvme/nvme_stubs.o
00:02:42.752 CC lib/nvme/nvme_auth.o
00:02:43.012 CC lib/blob/blobstore.o
00:02:43.272 CC lib/nvme/nvme_cuse.o
00:02:43.272 CC lib/nvme/nvme_rdma.o
00:02:43.272 CC lib/init/json_config.o
00:02:43.533 CC lib/virtio/virtio.o
00:02:43.533 CC lib/fsdev/fsdev.o
00:02:43.533 LIB libspdk_accel.a
00:02:43.533 SO libspdk_accel.so.16.0
00:02:43.533 CC lib/init/subsystem.o
00:02:43.793 SYMLINK libspdk_accel.so
00:02:43.793 CC lib/init/subsystem_rpc.o
00:02:43.793 CC lib/virtio/virtio_vhost_user.o
00:02:43.793 CC lib/init/rpc.o
00:02:43.793 CC lib/blob/request.o
00:02:43.793 CC lib/blob/zeroes.o
00:02:44.053 CC lib/bdev/bdev.o
00:02:44.053 LIB libspdk_init.a
00:02:44.053 SO libspdk_init.so.6.0
00:02:44.053 CC lib/blob/blob_bs_dev.o
00:02:44.053 CC lib/fsdev/fsdev_io.o
00:02:44.053 CC lib/bdev/bdev_rpc.o
00:02:44.053 SYMLINK libspdk_init.so
00:02:44.053 CC lib/virtio/virtio_vfio_user.o
00:02:44.053 CC lib/fsdev/fsdev_rpc.o
00:02:44.054 CC lib/bdev/bdev_zone.o
00:02:44.054 CC lib/event/app.o
00:02:44.427 CC lib/event/reactor.o
00:02:44.427 CC lib/bdev/part.o
00:02:44.427 CC lib/event/log_rpc.o
00:02:44.427 CC lib/bdev/scsi_nvme.o
00:02:44.427 CC lib/virtio/virtio_pci.o
00:02:44.427 LIB libspdk_fsdev.a
00:02:44.427 SO libspdk_fsdev.so.2.0
00:02:44.427 CC lib/event/app_rpc.o
00:02:44.687 SYMLINK libspdk_fsdev.so
00:02:44.687 CC lib/event/scheduler_static.o
00:02:44.687 LIB libspdk_nvme.a
00:02:44.687 LIB libspdk_virtio.a
00:02:44.687 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:44.687 SO libspdk_virtio.so.7.0
00:02:44.947 LIB libspdk_event.a
00:02:44.947 SO libspdk_nvme.so.15.0
00:02:44.947 SYMLINK libspdk_virtio.so
00:02:44.947 SO libspdk_event.so.14.0
00:02:44.947 SYMLINK libspdk_event.so
00:02:45.207 SYMLINK libspdk_nvme.so
00:02:45.467 LIB libspdk_fuse_dispatcher.a
00:02:45.467 SO libspdk_fuse_dispatcher.so.1.0
00:02:45.467 SYMLINK libspdk_fuse_dispatcher.so
00:02:46.405 LIB libspdk_blob.a
00:02:46.405 SO libspdk_blob.so.11.0
00:02:46.664 SYMLINK libspdk_blob.so
00:02:46.664 LIB libspdk_bdev.a
00:02:46.923 SO libspdk_bdev.so.17.0
00:02:46.923 SYMLINK libspdk_bdev.so
00:02:46.923 CC lib/blobfs/blobfs.o
00:02:46.923 CC lib/blobfs/tree.o
00:02:46.923 CC lib/lvol/lvol.o
00:02:47.182 CC lib/ublk/ublk.o
00:02:47.182 CC lib/ublk/ublk_rpc.o
00:02:47.182 CC lib/ftl/ftl_core.o
00:02:47.182 CC lib/ftl/ftl_init.o
00:02:47.182 CC lib/scsi/dev.o
00:02:47.182 CC lib/nbd/nbd.o
00:02:47.182 CC lib/nvmf/ctrlr.o
00:02:47.182 CC lib/nvmf/ctrlr_discovery.o
00:02:47.182 CC lib/nvmf/ctrlr_bdev.o
00:02:47.182 CC lib/nvmf/subsystem.o
00:02:47.442 CC lib/scsi/lun.o
00:02:47.442 CC lib/ftl/ftl_layout.o
00:02:47.442 CC lib/nbd/nbd_rpc.o
00:02:47.702 CC lib/scsi/port.o
00:02:47.702 CC lib/scsi/scsi.o
00:02:47.702 LIB libspdk_nbd.a
00:02:47.702 SO libspdk_nbd.so.7.0
00:02:47.702 LIB libspdk_ublk.a
00:02:47.702 SYMLINK libspdk_nbd.so
00:02:47.702 CC lib/scsi/scsi_bdev.o
00:02:47.702 CC lib/nvmf/nvmf.o
00:02:47.702 CC lib/ftl/ftl_debug.o
00:02:47.702 SO libspdk_ublk.so.3.0
00:02:47.702 CC lib/scsi/scsi_pr.o
00:02:47.961 SYMLINK libspdk_ublk.so
00:02:47.961 CC lib/scsi/scsi_rpc.o
00:02:47.961 LIB libspdk_blobfs.a
00:02:47.961 SO libspdk_blobfs.so.10.0
00:02:47.961 SYMLINK libspdk_blobfs.so
00:02:47.961 CC lib/scsi/task.o
00:02:47.961 LIB libspdk_lvol.a
00:02:47.961 CC lib/ftl/ftl_io.o
00:02:47.961 CC lib/nvmf/nvmf_rpc.o
00:02:47.961 CC lib/ftl/ftl_sb.o
00:02:47.961 SO libspdk_lvol.so.10.0
00:02:47.961 SYMLINK libspdk_lvol.so
00:02:48.221 CC lib/ftl/ftl_l2p.o
00:02:48.221 CC lib/nvmf/transport.o
00:02:48.221 CC lib/nvmf/tcp.o
00:02:48.221 CC lib/nvmf/stubs.o
00:02:48.221 CC lib/ftl/ftl_l2p_flat.o
00:02:48.221 LIB libspdk_scsi.a
00:02:48.221 CC lib/ftl/ftl_nv_cache.o
00:02:48.221 SO libspdk_scsi.so.9.0
00:02:48.480 SYMLINK libspdk_scsi.so
00:02:48.480 CC lib/ftl/ftl_band.o
00:02:48.480 CC lib/iscsi/conn.o
00:02:48.739 CC lib/nvmf/mdns_server.o
00:02:48.739 CC lib/nvmf/rdma.o
00:02:48.739 CC lib/iscsi/init_grp.o
00:02:48.997 CC lib/iscsi/iscsi.o
00:02:48.997 CC lib/iscsi/param.o
00:02:48.997 CC lib/iscsi/portal_grp.o
00:02:48.997 CC lib/ftl/ftl_band_ops.o
00:02:48.997 CC lib/iscsi/tgt_node.o
00:02:48.997 CC lib/nvmf/auth.o
00:02:49.255 CC lib/iscsi/iscsi_subsystem.o
00:02:49.255 CC lib/ftl/ftl_writer.o
00:02:49.255 CC lib/iscsi/iscsi_rpc.o
00:02:49.255 CC lib/iscsi/task.o
00:02:49.255 CC lib/ftl/ftl_rq.o
00:02:49.515 CC lib/ftl/ftl_reloc.o
00:02:49.515 CC lib/ftl/ftl_l2p_cache.o
00:02:49.515 CC lib/ftl/ftl_p2l.o
00:02:49.515 CC lib/ftl/ftl_p2l_log.o
00:02:49.777 CC lib/ftl/mngt/ftl_mngt.o
00:02:49.777 CC lib/vhost/vhost.o
00:02:49.777 CC lib/vhost/vhost_rpc.o
00:02:50.041 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:50.041 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:50.041 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:50.041 CC lib/vhost/vhost_scsi.o
00:02:50.041 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:50.041 CC lib/vhost/vhost_blk.o
00:02:50.041 CC lib/vhost/rte_vhost_user.o
00:02:50.299 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:50.299 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:50.299 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:50.558 LIB libspdk_iscsi.a
00:02:50.558 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:50.558 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:50.558 SO libspdk_iscsi.so.8.0
00:02:50.558 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:50.558 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:50.817 SYMLINK libspdk_iscsi.so
00:02:50.817 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:50.817 CC lib/ftl/utils/ftl_conf.o
00:02:50.817 CC lib/ftl/utils/ftl_md.o
00:02:50.817 CC lib/ftl/utils/ftl_mempool.o
00:02:50.817 CC lib/ftl/utils/ftl_bitmap.o
00:02:50.817 CC lib/ftl/utils/ftl_property.o
00:02:51.076 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:51.076 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:51.076 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:51.076 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:51.076 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:51.076 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:51.076 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:51.334 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:51.334 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:51.334 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:51.335 LIB libspdk_vhost.a
00:02:51.335 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:51.335 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:51.335 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:51.335 SO libspdk_vhost.so.8.0
00:02:51.335 LIB libspdk_nvmf.a
00:02:51.335 CC lib/ftl/base/ftl_base_dev.o
00:02:51.335 CC lib/ftl/base/ftl_base_bdev.o
00:02:51.335 CC lib/ftl/ftl_trace.o
00:02:51.335 SYMLINK libspdk_vhost.so
00:02:51.593 SO libspdk_nvmf.so.20.0
00:02:51.593 LIB libspdk_ftl.a
00:02:51.593 SYMLINK libspdk_nvmf.so
00:02:51.852 SO libspdk_ftl.so.9.0
00:02:52.110 SYMLINK libspdk_ftl.so
00:02:52.678 CC module/env_dpdk/env_dpdk_rpc.o
00:02:52.678 CC module/sock/posix/posix.o
00:02:52.678 CC module/accel/ioat/accel_ioat.o
00:02:52.678 CC module/accel/error/accel_error.o
00:02:52.678 CC module/accel/iaa/accel_iaa.o
00:02:52.678 CC module/accel/dsa/accel_dsa.o
00:02:52.678 CC module/blob/bdev/blob_bdev.o
00:02:52.678 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:52.678 CC module/keyring/file/keyring.o
00:02:52.678 CC module/fsdev/aio/fsdev_aio.o
00:02:52.678 LIB libspdk_env_dpdk_rpc.a
00:02:52.678 SO libspdk_env_dpdk_rpc.so.6.0
00:02:52.678 SYMLINK libspdk_env_dpdk_rpc.so
00:02:52.678 CC module/accel/dsa/accel_dsa_rpc.o
00:02:52.937 CC module/keyring/file/keyring_rpc.o
00:02:52.937 CC module/accel/ioat/accel_ioat_rpc.o
00:02:52.937 CC module/accel/iaa/accel_iaa_rpc.o
00:02:52.937 LIB libspdk_scheduler_dynamic.a
00:02:52.937 CC module/accel/error/accel_error_rpc.o
00:02:52.937 SO libspdk_scheduler_dynamic.so.4.0
00:02:52.937 CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:52.937 LIB libspdk_blob_bdev.a
00:02:52.937 SYMLINK libspdk_scheduler_dynamic.so
00:02:52.937 LIB libspdk_accel_dsa.a
00:02:52.937 LIB libspdk_keyring_file.a
00:02:52.937 SO libspdk_blob_bdev.so.11.0
00:02:52.937 LIB libspdk_accel_ioat.a
00:02:52.937 SO libspdk_keyring_file.so.2.0
00:02:52.937 SO libspdk_accel_dsa.so.5.0
00:02:52.937 LIB libspdk_accel_iaa.a
00:02:52.937 SO libspdk_accel_ioat.so.6.0
00:02:52.937 LIB libspdk_accel_error.a
00:02:52.937 SO libspdk_accel_iaa.so.3.0
00:02:52.937 SYMLINK libspdk_blob_bdev.so
00:02:52.937 SO libspdk_accel_error.so.2.0
00:02:52.937 SYMLINK libspdk_keyring_file.so
00:02:52.937 SYMLINK libspdk_accel_dsa.so
00:02:52.937 CC module/fsdev/aio/linux_aio_mgr.o
00:02:52.937 SYMLINK libspdk_accel_ioat.so
00:02:52.937 SYMLINK libspdk_accel_iaa.so
00:02:53.207 SYMLINK libspdk_accel_error.so
00:02:53.207 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:53.207 CC module/keyring/linux/keyring.o
00:02:53.207 CC module/scheduler/gscheduler/gscheduler.o
00:02:53.207 CC module/keyring/linux/keyring_rpc.o
00:02:53.207 CC module/bdev/error/vbdev_error.o
00:02:53.207 CC module/bdev/delay/vbdev_delay.o
00:02:53.207 CC module/bdev/gpt/gpt.o
00:02:53.207 LIB libspdk_scheduler_dpdk_governor.a
00:02:53.207 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:53.207 CC module/blobfs/bdev/blobfs_bdev.o
00:02:53.465 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:53.465 LIB libspdk_scheduler_gscheduler.a
00:02:53.465 LIB libspdk_keyring_linux.a
00:02:53.465 SO libspdk_scheduler_gscheduler.so.4.0
00:02:53.465 SO libspdk_keyring_linux.so.1.0
00:02:53.465 LIB libspdk_fsdev_aio.a
00:02:53.465 SO libspdk_fsdev_aio.so.1.0
00:02:53.465 SYMLINK libspdk_scheduler_gscheduler.so
00:02:53.465 SYMLINK libspdk_keyring_linux.so
00:02:53.465 CC module/bdev/gpt/vbdev_gpt.o
00:02:53.465 CC module/bdev/lvol/vbdev_lvol.o
00:02:53.465 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:53.465 LIB libspdk_sock_posix.a
00:02:53.465 CC module/bdev/malloc/bdev_malloc.o
00:02:53.465 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:53.465 SO libspdk_sock_posix.so.6.0
00:02:53.465 CC module/bdev/error/vbdev_error_rpc.o
00:02:53.465 SYMLINK libspdk_fsdev_aio.so
00:02:53.465 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:53.723 SYMLINK libspdk_sock_posix.so
00:02:53.723 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:53.723 LIB libspdk_blobfs_bdev.a
00:02:53.723 CC module/bdev/null/bdev_null.o
00:02:53.723 SO libspdk_blobfs_bdev.so.6.0
00:02:53.723 LIB libspdk_bdev_error.a
00:02:53.723 CC module/bdev/null/bdev_null_rpc.o
00:02:53.723 SO libspdk_bdev_error.so.6.0
00:02:53.723 SYMLINK libspdk_blobfs_bdev.so
00:02:53.723 LIB libspdk_bdev_gpt.a
00:02:53.723 SO libspdk_bdev_gpt.so.6.0
00:02:53.723 SYMLINK libspdk_bdev_error.so
00:02:53.981 LIB libspdk_bdev_delay.a
00:02:53.981 SO libspdk_bdev_delay.so.6.0
00:02:53.981 SYMLINK libspdk_bdev_gpt.so
00:02:53.981 CC module/bdev/nvme/bdev_nvme.o
00:02:53.981 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:53.981 CC module/bdev/nvme/nvme_rpc.o
00:02:53.981 LIB libspdk_bdev_malloc.a
00:02:53.981 CC module/bdev/passthru/vbdev_passthru.o
00:02:53.981 SYMLINK libspdk_bdev_delay.so
00:02:53.981 SO libspdk_bdev_malloc.so.6.0
00:02:53.981 CC module/bdev/nvme/bdev_mdns_client.o
00:02:53.981 CC module/bdev/nvme/vbdev_opal.o
00:02:53.981 CC module/bdev/raid/bdev_raid.o
00:02:53.981 LIB libspdk_bdev_null.a
00:02:53.981 SYMLINK libspdk_bdev_malloc.so
00:02:53.981 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:53.981 SO libspdk_bdev_null.so.6.0
00:02:53.981 LIB libspdk_bdev_lvol.a
00:02:53.981 SYMLINK libspdk_bdev_null.so
00:02:53.981 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:54.239 SO libspdk_bdev_lvol.so.6.0
00:02:54.239 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:54.239 SYMLINK libspdk_bdev_lvol.so
00:02:54.239 LIB libspdk_bdev_passthru.a
00:02:54.239 SO libspdk_bdev_passthru.so.6.0
00:02:54.239 CC module/bdev/split/vbdev_split.o
00:02:54.239 CC module/bdev/raid/bdev_raid_rpc.o
00:02:54.239 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:54.239 CC module/bdev/aio/bdev_aio.o
00:02:54.497 SYMLINK libspdk_bdev_passthru.so
00:02:54.497 CC module/bdev/ftl/bdev_ftl.o
00:02:54.497 CC module/bdev/iscsi/bdev_iscsi.o
00:02:54.497 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:54.497 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:54.497 CC module/bdev/split/vbdev_split_rpc.o
00:02:54.756 CC module/bdev/aio/bdev_aio_rpc.o
00:02:54.756 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:54.756 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:54.756 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:54.756 LIB libspdk_bdev_split.a
00:02:54.756 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:54.756 SO libspdk_bdev_split.so.6.0
00:02:54.756 LIB libspdk_bdev_aio.a
00:02:54.756 SO libspdk_bdev_aio.so.6.0
00:02:55.015 SYMLINK libspdk_bdev_split.so
00:02:55.015 SYMLINK libspdk_bdev_aio.so
00:02:55.015 LIB libspdk_bdev_iscsi.a
00:02:55.015 LIB libspdk_bdev_zone_block.a
00:02:55.015 CC module/bdev/raid/bdev_raid_sb.o
00:02:55.015 CC module/bdev/raid/raid0.o
00:02:55.015 LIB libspdk_bdev_ftl.a
00:02:55.015 SO libspdk_bdev_zone_block.so.6.0
00:02:55.015 SO libspdk_bdev_iscsi.so.6.0
00:02:55.015 SO libspdk_bdev_ftl.so.6.0
00:02:55.015 SYMLINK libspdk_bdev_zone_block.so
00:02:55.015 SYMLINK libspdk_bdev_iscsi.so
00:02:55.015 CC module/bdev/raid/raid1.o
00:02:55.015 CC module/bdev/raid/concat.o
00:02:55.015 CC module/bdev/raid/raid5f.o
00:02:55.015 SYMLINK libspdk_bdev_ftl.so
00:02:55.273 LIB libspdk_bdev_virtio.a
00:02:55.273 SO libspdk_bdev_virtio.so.6.0
00:02:55.273 SYMLINK libspdk_bdev_virtio.so
00:02:55.530 LIB libspdk_bdev_raid.a
00:02:55.788 SO libspdk_bdev_raid.so.6.0
00:02:55.788 SYMLINK libspdk_bdev_raid.so
00:02:56.722 LIB libspdk_bdev_nvme.a
00:02:56.722 SO libspdk_bdev_nvme.so.7.1
00:02:56.980 SYMLINK libspdk_bdev_nvme.so
00:02:57.589 CC module/event/subsystems/fsdev/fsdev.o
00:02:57.589 CC module/event/subsystems/sock/sock.o
00:02:57.589 CC module/event/subsystems/iobuf/iobuf.o
00:02:57.589 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:57.589 CC module/event/subsystems/keyring/keyring.o
00:02:57.589 CC module/event/subsystems/vmd/vmd.o
00:02:57.589 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:57.589 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:57.589 CC module/event/subsystems/scheduler/scheduler.o
00:02:57.589 LIB libspdk_event_sock.a
00:02:57.589 LIB libspdk_event_keyring.a
00:02:57.589 LIB libspdk_event_vmd.a
00:02:57.589 LIB libspdk_event_scheduler.a
00:02:57.589 LIB libspdk_event_fsdev.a
00:02:57.589 LIB libspdk_event_iobuf.a
00:02:57.589 SO libspdk_event_sock.so.5.0
00:02:57.589 SO libspdk_event_keyring.so.1.0
00:02:57.589 SO libspdk_event_vmd.so.6.0
00:02:57.589 SO libspdk_event_scheduler.so.4.0
00:02:57.589 SO libspdk_event_fsdev.so.1.0
00:02:57.589 LIB libspdk_event_vhost_blk.a
00:02:57.589 SO libspdk_event_iobuf.so.3.0
00:02:57.589 SYMLINK libspdk_event_sock.so
00:02:57.848 SO libspdk_event_vhost_blk.so.3.0
00:02:57.848 SYMLINK libspdk_event_keyring.so
00:02:57.848 SYMLINK libspdk_event_scheduler.so
00:02:57.848 SYMLINK libspdk_event_vmd.so
00:02:57.848 SYMLINK libspdk_event_fsdev.so
00:02:57.848 SYMLINK libspdk_event_iobuf.so
00:02:57.848 SYMLINK libspdk_event_vhost_blk.so
00:02:58.106 CC module/event/subsystems/accel/accel.o
00:02:58.365 LIB libspdk_event_accel.a
00:02:58.365 SO libspdk_event_accel.so.6.0
00:02:58.365 SYMLINK libspdk_event_accel.so
00:02:58.932 CC module/event/subsystems/bdev/bdev.o
00:02:58.932 LIB libspdk_event_bdev.a
00:02:58.932 SO libspdk_event_bdev.so.6.0
00:02:59.191 SYMLINK libspdk_event_bdev.so
00:02:59.191 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:59.191 CC module/event/subsystems/scsi/scsi.o
00:02:59.191 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:59.451 CC module/event/subsystems/ublk/ublk.o
00:02:59.451 CC module/event/subsystems/nbd/nbd.o
00:02:59.451 LIB libspdk_event_scsi.a
00:02:59.451 LIB libspdk_event_nbd.a
00:02:59.451 SO libspdk_event_scsi.so.6.0
00:02:59.451 SO libspdk_event_nbd.so.6.0
00:02:59.451 LIB libspdk_event_ublk.a
00:02:59.451 SO libspdk_event_ublk.so.3.0
00:02:59.451 SYMLINK libspdk_event_scsi.so
00:02:59.451 LIB libspdk_event_nvmf.a
00:02:59.451 SYMLINK libspdk_event_nbd.so
00:02:59.712 SO libspdk_event_nvmf.so.6.0 00:02:59.712 SYMLINK libspdk_event_ublk.so 00:02:59.712 SYMLINK libspdk_event_nvmf.so 00:02:59.972 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:59.972 CC module/event/subsystems/iscsi/iscsi.o 00:02:59.972 LIB libspdk_event_vhost_scsi.a 00:02:59.972 SO libspdk_event_vhost_scsi.so.3.0 00:02:59.972 LIB libspdk_event_iscsi.a 00:02:59.972 SO libspdk_event_iscsi.so.6.0 00:03:00.231 SYMLINK libspdk_event_vhost_scsi.so 00:03:00.231 SYMLINK libspdk_event_iscsi.so 00:03:00.231 SO libspdk.so.6.0 00:03:00.490 SYMLINK libspdk.so 00:03:00.749 CC app/trace_record/trace_record.o 00:03:00.749 CC app/spdk_nvme_identify/identify.o 00:03:00.749 CXX app/trace/trace.o 00:03:00.749 CC app/spdk_nvme_perf/perf.o 00:03:00.749 CC app/spdk_lspci/spdk_lspci.o 00:03:00.749 CC app/iscsi_tgt/iscsi_tgt.o 00:03:00.749 CC app/nvmf_tgt/nvmf_main.o 00:03:00.749 CC app/spdk_tgt/spdk_tgt.o 00:03:00.749 CC test/thread/poller_perf/poller_perf.o 00:03:00.749 CC examples/util/zipf/zipf.o 00:03:00.749 LINK spdk_lspci 00:03:01.009 LINK nvmf_tgt 00:03:01.009 LINK poller_perf 00:03:01.009 LINK iscsi_tgt 00:03:01.009 LINK zipf 00:03:01.009 LINK spdk_tgt 00:03:01.009 LINK spdk_trace_record 00:03:01.009 CC app/spdk_nvme_discover/discovery_aer.o 00:03:01.009 LINK spdk_trace 00:03:01.268 CC app/spdk_top/spdk_top.o 00:03:01.268 CC app/spdk_dd/spdk_dd.o 00:03:01.268 CC examples/ioat/perf/perf.o 00:03:01.268 LINK spdk_nvme_discover 00:03:01.268 CC examples/vmd/lsvmd/lsvmd.o 00:03:01.268 CC test/dma/test_dma/test_dma.o 00:03:01.268 CC test/app/bdev_svc/bdev_svc.o 00:03:01.528 LINK lsvmd 00:03:01.528 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:01.528 LINK ioat_perf 00:03:01.528 CC test/app/histogram_perf/histogram_perf.o 00:03:01.528 LINK bdev_svc 00:03:01.788 LINK spdk_nvme_perf 00:03:01.788 LINK spdk_dd 00:03:01.788 CC examples/vmd/led/led.o 00:03:01.788 LINK spdk_nvme_identify 00:03:01.788 LINK histogram_perf 00:03:01.788 CC examples/ioat/verify/verify.o 
00:03:01.788 LINK led 00:03:01.788 LINK test_dma 00:03:01.788 CC test/app/jsoncat/jsoncat.o 00:03:02.049 LINK nvme_fuzz 00:03:02.049 CC test/app/stub/stub.o 00:03:02.049 TEST_HEADER include/spdk/accel.h 00:03:02.049 TEST_HEADER include/spdk/accel_module.h 00:03:02.049 TEST_HEADER include/spdk/assert.h 00:03:02.049 TEST_HEADER include/spdk/barrier.h 00:03:02.049 TEST_HEADER include/spdk/base64.h 00:03:02.049 TEST_HEADER include/spdk/bdev.h 00:03:02.049 TEST_HEADER include/spdk/bdev_module.h 00:03:02.049 TEST_HEADER include/spdk/bdev_zone.h 00:03:02.049 TEST_HEADER include/spdk/bit_array.h 00:03:02.049 TEST_HEADER include/spdk/bit_pool.h 00:03:02.049 TEST_HEADER include/spdk/blob_bdev.h 00:03:02.049 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:02.049 TEST_HEADER include/spdk/blobfs.h 00:03:02.049 LINK verify 00:03:02.049 TEST_HEADER include/spdk/blob.h 00:03:02.049 TEST_HEADER include/spdk/conf.h 00:03:02.049 TEST_HEADER include/spdk/config.h 00:03:02.049 TEST_HEADER include/spdk/cpuset.h 00:03:02.049 TEST_HEADER include/spdk/crc16.h 00:03:02.049 TEST_HEADER include/spdk/crc32.h 00:03:02.049 TEST_HEADER include/spdk/crc64.h 00:03:02.049 TEST_HEADER include/spdk/dif.h 00:03:02.049 TEST_HEADER include/spdk/dma.h 00:03:02.049 TEST_HEADER include/spdk/endian.h 00:03:02.049 TEST_HEADER include/spdk/env_dpdk.h 00:03:02.049 LINK jsoncat 00:03:02.049 TEST_HEADER include/spdk/env.h 00:03:02.049 TEST_HEADER include/spdk/event.h 00:03:02.049 TEST_HEADER include/spdk/fd_group.h 00:03:02.049 TEST_HEADER include/spdk/fd.h 00:03:02.049 TEST_HEADER include/spdk/file.h 00:03:02.049 TEST_HEADER include/spdk/fsdev.h 00:03:02.049 TEST_HEADER include/spdk/fsdev_module.h 00:03:02.049 TEST_HEADER include/spdk/ftl.h 00:03:02.049 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:02.049 TEST_HEADER include/spdk/gpt_spec.h 00:03:02.049 TEST_HEADER include/spdk/hexlify.h 00:03:02.049 TEST_HEADER include/spdk/histogram_data.h 00:03:02.049 TEST_HEADER include/spdk/idxd.h 00:03:02.049 TEST_HEADER 
include/spdk/idxd_spec.h 00:03:02.049 TEST_HEADER include/spdk/init.h 00:03:02.049 TEST_HEADER include/spdk/ioat.h 00:03:02.049 TEST_HEADER include/spdk/ioat_spec.h 00:03:02.049 TEST_HEADER include/spdk/iscsi_spec.h 00:03:02.049 CC test/event/event_perf/event_perf.o 00:03:02.049 TEST_HEADER include/spdk/json.h 00:03:02.049 TEST_HEADER include/spdk/jsonrpc.h 00:03:02.049 TEST_HEADER include/spdk/keyring.h 00:03:02.049 TEST_HEADER include/spdk/keyring_module.h 00:03:02.049 TEST_HEADER include/spdk/likely.h 00:03:02.049 TEST_HEADER include/spdk/log.h 00:03:02.049 TEST_HEADER include/spdk/lvol.h 00:03:02.049 TEST_HEADER include/spdk/md5.h 00:03:02.049 TEST_HEADER include/spdk/memory.h 00:03:02.049 TEST_HEADER include/spdk/mmio.h 00:03:02.049 TEST_HEADER include/spdk/nbd.h 00:03:02.049 TEST_HEADER include/spdk/net.h 00:03:02.049 TEST_HEADER include/spdk/notify.h 00:03:02.049 TEST_HEADER include/spdk/nvme.h 00:03:02.049 TEST_HEADER include/spdk/nvme_intel.h 00:03:02.049 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:02.049 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:02.049 TEST_HEADER include/spdk/nvme_spec.h 00:03:02.049 TEST_HEADER include/spdk/nvme_zns.h 00:03:02.049 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:02.049 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:02.049 TEST_HEADER include/spdk/nvmf.h 00:03:02.049 TEST_HEADER include/spdk/nvmf_spec.h 00:03:02.049 TEST_HEADER include/spdk/nvmf_transport.h 00:03:02.049 TEST_HEADER include/spdk/opal.h 00:03:02.049 TEST_HEADER include/spdk/opal_spec.h 00:03:02.049 TEST_HEADER include/spdk/pci_ids.h 00:03:02.049 TEST_HEADER include/spdk/pipe.h 00:03:02.049 TEST_HEADER include/spdk/queue.h 00:03:02.049 TEST_HEADER include/spdk/reduce.h 00:03:02.049 TEST_HEADER include/spdk/rpc.h 00:03:02.049 TEST_HEADER include/spdk/scheduler.h 00:03:02.049 TEST_HEADER include/spdk/scsi.h 00:03:02.049 CC test/env/mem_callbacks/mem_callbacks.o 00:03:02.049 TEST_HEADER include/spdk/scsi_spec.h 00:03:02.049 TEST_HEADER include/spdk/sock.h 
00:03:02.049 TEST_HEADER include/spdk/stdinc.h 00:03:02.049 TEST_HEADER include/spdk/string.h 00:03:02.049 TEST_HEADER include/spdk/thread.h 00:03:02.049 TEST_HEADER include/spdk/trace.h 00:03:02.049 TEST_HEADER include/spdk/trace_parser.h 00:03:02.049 TEST_HEADER include/spdk/tree.h 00:03:02.049 TEST_HEADER include/spdk/ublk.h 00:03:02.049 TEST_HEADER include/spdk/util.h 00:03:02.049 TEST_HEADER include/spdk/uuid.h 00:03:02.049 LINK stub 00:03:02.049 TEST_HEADER include/spdk/version.h 00:03:02.049 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:02.049 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:02.049 TEST_HEADER include/spdk/vhost.h 00:03:02.049 TEST_HEADER include/spdk/vmd.h 00:03:02.049 TEST_HEADER include/spdk/xor.h 00:03:02.049 TEST_HEADER include/spdk/zipf.h 00:03:02.049 CXX test/cpp_headers/accel.o 00:03:02.310 CXX test/cpp_headers/accel_module.o 00:03:02.310 CC examples/idxd/perf/perf.o 00:03:02.310 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:02.310 LINK event_perf 00:03:02.310 CC app/fio/nvme/fio_plugin.o 00:03:02.310 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:02.310 LINK spdk_top 00:03:02.310 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:02.310 CXX test/cpp_headers/assert.o 00:03:02.310 CC test/event/reactor/reactor.o 00:03:02.570 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:02.570 CC test/event/reactor_perf/reactor_perf.o 00:03:02.570 CXX test/cpp_headers/barrier.o 00:03:02.570 CC test/event/app_repeat/app_repeat.o 00:03:02.570 LINK idxd_perf 00:03:02.570 LINK reactor 00:03:02.570 LINK reactor_perf 00:03:02.570 CXX test/cpp_headers/base64.o 00:03:02.830 LINK mem_callbacks 00:03:02.830 LINK app_repeat 00:03:02.830 CXX test/cpp_headers/bdev.o 00:03:02.830 LINK interrupt_tgt 00:03:02.830 LINK vhost_fuzz 00:03:02.830 CXX test/cpp_headers/bdev_module.o 00:03:02.830 LINK spdk_nvme 00:03:02.830 CC test/event/scheduler/scheduler.o 00:03:02.830 CC test/env/vtophys/vtophys.o 00:03:02.830 CXX test/cpp_headers/bdev_zone.o 00:03:03.090 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:03.090 CXX test/cpp_headers/bit_array.o 00:03:03.090 CC app/fio/bdev/fio_plugin.o 00:03:03.090 LINK vtophys 00:03:03.090 CC app/vhost/vhost.o 00:03:03.090 CXX test/cpp_headers/bit_pool.o 00:03:03.090 LINK scheduler 00:03:03.090 CC examples/thread/thread/thread_ex.o 00:03:03.090 CXX test/cpp_headers/blob_bdev.o 00:03:03.090 LINK env_dpdk_post_init 00:03:03.350 CC examples/sock/hello_world/hello_sock.o 00:03:03.350 LINK vhost 00:03:03.350 CXX test/cpp_headers/blobfs_bdev.o 00:03:03.350 CC test/rpc_client/rpc_client_test.o 00:03:03.350 CC test/env/memory/memory_ut.o 00:03:03.350 LINK thread 00:03:03.611 CC test/accel/dif/dif.o 00:03:03.611 LINK hello_sock 00:03:03.611 CXX test/cpp_headers/blobfs.o 00:03:03.611 LINK rpc_client_test 00:03:03.611 CC test/blobfs/mkfs/mkfs.o 00:03:03.611 LINK spdk_bdev 00:03:03.611 CC test/env/pci/pci_ut.o 00:03:03.611 CXX test/cpp_headers/blob.o 00:03:03.871 LINK mkfs 00:03:03.871 CXX test/cpp_headers/conf.o 00:03:03.871 CC examples/accel/perf/accel_perf.o 00:03:03.871 CC examples/nvme/hello_world/hello_world.o 00:03:03.871 CC examples/blob/hello_world/hello_blob.o 00:03:03.871 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:04.131 CXX test/cpp_headers/config.o 00:03:04.131 CXX test/cpp_headers/cpuset.o 00:03:04.131 LINK pci_ut 00:03:04.131 LINK hello_world 00:03:04.131 CXX test/cpp_headers/crc16.o 00:03:04.131 LINK hello_blob 00:03:04.391 CC test/lvol/esnap/esnap.o 00:03:04.391 LINK hello_fsdev 00:03:04.391 LINK iscsi_fuzz 00:03:04.391 LINK dif 00:03:04.391 CXX test/cpp_headers/crc32.o 00:03:04.391 CXX test/cpp_headers/crc64.o 00:03:04.391 CC examples/nvme/reconnect/reconnect.o 00:03:04.391 LINK accel_perf 00:03:04.663 CXX test/cpp_headers/dif.o 00:03:04.663 CC examples/blob/cli/blobcli.o 00:03:04.663 CXX test/cpp_headers/dma.o 00:03:04.663 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:04.663 CC examples/nvme/arbitration/arbitration.o 00:03:04.663 CXX test/cpp_headers/endian.o 
00:03:04.663 CC test/nvme/aer/aer.o 00:03:04.663 LINK memory_ut 00:03:04.950 LINK reconnect 00:03:04.950 CXX test/cpp_headers/env_dpdk.o 00:03:04.950 CC test/bdev/bdevio/bdevio.o 00:03:04.950 CC examples/bdev/hello_world/hello_bdev.o 00:03:04.950 CXX test/cpp_headers/env.o 00:03:04.950 LINK arbitration 00:03:05.222 LINK aer 00:03:05.222 CC examples/bdev/bdevperf/bdevperf.o 00:03:05.222 CC test/nvme/reset/reset.o 00:03:05.222 LINK blobcli 00:03:05.222 LINK hello_bdev 00:03:05.222 CXX test/cpp_headers/event.o 00:03:05.222 LINK nvme_manage 00:03:05.222 CC examples/nvme/hotplug/hotplug.o 00:03:05.222 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:05.482 LINK bdevio 00:03:05.482 CXX test/cpp_headers/fd_group.o 00:03:05.482 CC examples/nvme/abort/abort.o 00:03:05.482 LINK reset 00:03:05.482 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:05.482 CC test/nvme/sgl/sgl.o 00:03:05.482 LINK cmb_copy 00:03:05.482 CXX test/cpp_headers/fd.o 00:03:05.482 CXX test/cpp_headers/file.o 00:03:05.482 LINK hotplug 00:03:05.741 LINK pmr_persistence 00:03:05.741 CC test/nvme/e2edp/nvme_dp.o 00:03:05.741 CXX test/cpp_headers/fsdev.o 00:03:05.741 CXX test/cpp_headers/fsdev_module.o 00:03:05.741 CC test/nvme/overhead/overhead.o 00:03:05.741 CC test/nvme/err_injection/err_injection.o 00:03:05.741 LINK sgl 00:03:05.741 LINK abort 00:03:05.741 CXX test/cpp_headers/ftl.o 00:03:06.001 CXX test/cpp_headers/fuse_dispatcher.o 00:03:06.001 LINK nvme_dp 00:03:06.001 CC test/nvme/startup/startup.o 00:03:06.001 LINK err_injection 00:03:06.001 CXX test/cpp_headers/gpt_spec.o 00:03:06.001 CC test/nvme/reserve/reserve.o 00:03:06.001 LINK bdevperf 00:03:06.001 CC test/nvme/simple_copy/simple_copy.o 00:03:06.261 LINK overhead 00:03:06.261 LINK startup 00:03:06.261 CC test/nvme/connect_stress/connect_stress.o 00:03:06.261 CXX test/cpp_headers/hexlify.o 00:03:06.261 CC test/nvme/boot_partition/boot_partition.o 00:03:06.261 CC test/nvme/compliance/nvme_compliance.o 00:03:06.261 LINK reserve 00:03:06.519 CC 
test/nvme/fused_ordering/fused_ordering.o 00:03:06.519 LINK simple_copy 00:03:06.519 CXX test/cpp_headers/histogram_data.o 00:03:06.519 LINK connect_stress 00:03:06.519 LINK boot_partition 00:03:06.519 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.519 CC examples/nvmf/nvmf/nvmf.o 00:03:06.519 CC test/nvme/fdp/fdp.o 00:03:06.519 CXX test/cpp_headers/idxd.o 00:03:06.519 CXX test/cpp_headers/idxd_spec.o 00:03:06.519 LINK fused_ordering 00:03:06.519 CXX test/cpp_headers/init.o 00:03:06.777 LINK doorbell_aers 00:03:06.777 LINK nvme_compliance 00:03:06.777 CC test/nvme/cuse/cuse.o 00:03:06.777 CXX test/cpp_headers/ioat.o 00:03:06.777 CXX test/cpp_headers/ioat_spec.o 00:03:06.777 CXX test/cpp_headers/iscsi_spec.o 00:03:06.777 CXX test/cpp_headers/json.o 00:03:06.777 CXX test/cpp_headers/jsonrpc.o 00:03:06.777 CXX test/cpp_headers/keyring.o 00:03:07.037 LINK nvmf 00:03:07.037 CXX test/cpp_headers/keyring_module.o 00:03:07.037 CXX test/cpp_headers/likely.o 00:03:07.037 CXX test/cpp_headers/log.o 00:03:07.037 CXX test/cpp_headers/lvol.o 00:03:07.037 LINK fdp 00:03:07.037 CXX test/cpp_headers/md5.o 00:03:07.037 CXX test/cpp_headers/memory.o 00:03:07.037 CXX test/cpp_headers/mmio.o 00:03:07.037 CXX test/cpp_headers/nbd.o 00:03:07.037 CXX test/cpp_headers/net.o 00:03:07.297 CXX test/cpp_headers/notify.o 00:03:07.297 CXX test/cpp_headers/nvme.o 00:03:07.297 CXX test/cpp_headers/nvme_intel.o 00:03:07.297 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.297 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:07.297 CXX test/cpp_headers/nvme_spec.o 00:03:07.297 CXX test/cpp_headers/nvme_zns.o 00:03:07.297 CXX test/cpp_headers/nvmf_cmd.o 00:03:07.297 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:07.297 CXX test/cpp_headers/nvmf.o 00:03:07.297 CXX test/cpp_headers/nvmf_spec.o 00:03:07.297 CXX test/cpp_headers/nvmf_transport.o 00:03:07.297 CXX test/cpp_headers/opal.o 00:03:07.557 CXX test/cpp_headers/opal_spec.o 00:03:07.557 CXX test/cpp_headers/pci_ids.o 00:03:07.557 CXX test/cpp_headers/pipe.o 
00:03:07.557 CXX test/cpp_headers/queue.o 00:03:07.557 CXX test/cpp_headers/reduce.o 00:03:07.557 CXX test/cpp_headers/rpc.o 00:03:07.557 CXX test/cpp_headers/scheduler.o 00:03:07.557 CXX test/cpp_headers/scsi.o 00:03:07.557 CXX test/cpp_headers/scsi_spec.o 00:03:07.557 CXX test/cpp_headers/sock.o 00:03:07.816 CXX test/cpp_headers/stdinc.o 00:03:07.816 CXX test/cpp_headers/string.o 00:03:07.816 CXX test/cpp_headers/thread.o 00:03:07.816 CXX test/cpp_headers/trace.o 00:03:07.816 CXX test/cpp_headers/trace_parser.o 00:03:07.816 CXX test/cpp_headers/tree.o 00:03:07.816 CXX test/cpp_headers/ublk.o 00:03:07.816 CXX test/cpp_headers/util.o 00:03:07.816 CXX test/cpp_headers/uuid.o 00:03:07.816 CXX test/cpp_headers/version.o 00:03:07.816 CXX test/cpp_headers/vfio_user_pci.o 00:03:07.816 CXX test/cpp_headers/vfio_user_spec.o 00:03:07.816 CXX test/cpp_headers/vhost.o 00:03:07.816 CXX test/cpp_headers/vmd.o 00:03:07.816 CXX test/cpp_headers/xor.o 00:03:08.076 CXX test/cpp_headers/zipf.o 00:03:08.336 LINK cuse 00:03:10.874 LINK esnap 00:03:11.133 00:03:11.133 real 1m21.535s 00:03:11.133 user 7m24.389s 00:03:11.133 sys 1m27.637s 00:03:11.133 03:10:00 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:11.133 03:10:00 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.133 ************************************ 00:03:11.133 END TEST make 00:03:11.133 ************************************ 00:03:11.133 03:10:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.133 03:10:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.133 03:10:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.133 03:10:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.133 03:10:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.133 03:10:00 -- pm/common@44 -- $ pid=5476 00:03:11.133 03:10:00 -- pm/common@50 -- $ kill -TERM 5476 00:03:11.133 03:10:00 -- pm/common@42 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.133 03:10:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.133 03:10:00 -- pm/common@44 -- $ pid=5478 00:03:11.133 03:10:00 -- pm/common@50 -- $ kill -TERM 5478 00:03:11.133 03:10:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:11.133 03:10:00 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:11.133 03:10:00 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:11.133 03:10:00 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:11.133 03:10:00 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:11.392 03:10:00 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:11.392 03:10:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.392 03:10:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.392 03:10:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.392 03:10:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.392 03:10:00 -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.392 03:10:00 -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.392 03:10:00 -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.392 03:10:00 -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.392 03:10:00 -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.392 03:10:00 -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.392 03:10:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.392 03:10:00 -- scripts/common.sh@344 -- # case "$op" in 00:03:11.392 03:10:00 -- scripts/common.sh@345 -- # : 1 00:03:11.392 03:10:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.392 03:10:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.392 03:10:00 -- scripts/common.sh@365 -- # decimal 1 00:03:11.392 03:10:00 -- scripts/common.sh@353 -- # local d=1 00:03:11.392 03:10:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.392 03:10:00 -- scripts/common.sh@355 -- # echo 1 00:03:11.392 03:10:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.392 03:10:00 -- scripts/common.sh@366 -- # decimal 2 00:03:11.392 03:10:00 -- scripts/common.sh@353 -- # local d=2 00:03:11.392 03:10:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.393 03:10:00 -- scripts/common.sh@355 -- # echo 2 00:03:11.393 03:10:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.393 03:10:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.393 03:10:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.393 03:10:00 -- scripts/common.sh@368 -- # return 0 00:03:11.393 03:10:00 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.393 03:10:00 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:11.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.393 --rc genhtml_branch_coverage=1 00:03:11.393 --rc genhtml_function_coverage=1 00:03:11.393 --rc genhtml_legend=1 00:03:11.393 --rc geninfo_all_blocks=1 00:03:11.393 --rc geninfo_unexecuted_blocks=1 00:03:11.393 00:03:11.393 ' 00:03:11.393 03:10:00 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:11.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.393 --rc genhtml_branch_coverage=1 00:03:11.393 --rc genhtml_function_coverage=1 00:03:11.393 --rc genhtml_legend=1 00:03:11.393 --rc geninfo_all_blocks=1 00:03:11.393 --rc geninfo_unexecuted_blocks=1 00:03:11.393 00:03:11.393 ' 00:03:11.393 03:10:00 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:11.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.393 --rc genhtml_branch_coverage=1 00:03:11.393 --rc 
genhtml_function_coverage=1 00:03:11.393 --rc genhtml_legend=1 00:03:11.393 --rc geninfo_all_blocks=1 00:03:11.393 --rc geninfo_unexecuted_blocks=1 00:03:11.393 00:03:11.393 ' 00:03:11.393 03:10:00 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:11.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.393 --rc genhtml_branch_coverage=1 00:03:11.393 --rc genhtml_function_coverage=1 00:03:11.393 --rc genhtml_legend=1 00:03:11.393 --rc geninfo_all_blocks=1 00:03:11.393 --rc geninfo_unexecuted_blocks=1 00:03:11.393 00:03:11.393 ' 00:03:11.393 03:10:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:11.393 03:10:00 -- nvmf/common.sh@7 -- # uname -s 00:03:11.393 03:10:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.393 03:10:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.393 03:10:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.393 03:10:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.393 03:10:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.393 03:10:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.393 03:10:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.393 03:10:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.393 03:10:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.393 03:10:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.393 03:10:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd400a4-2768-44b2-aa0b-0edb23284369 00:03:11.393 03:10:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=1dd400a4-2768-44b2-aa0b-0edb23284369 00:03:11.393 03:10:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.393 03:10:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.393 03:10:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:11.393 03:10:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:11.393 03:10:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:11.393 03:10:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:11.393 03:10:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.393 03:10:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.393 03:10:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.393 03:10:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.393 03:10:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.393 03:10:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.393 03:10:00 -- paths/export.sh@5 -- # export PATH 00:03:11.393 03:10:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.393 03:10:00 -- nvmf/common.sh@51 -- # : 0 00:03:11.393 03:10:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:11.393 03:10:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:11.393 03:10:00 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:11.393 03:10:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.393 03:10:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.393 03:10:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:11.393 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:11.393 03:10:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:11.393 03:10:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:11.393 03:10:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:11.393 03:10:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.393 03:10:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.393 03:10:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.393 03:10:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.393 03:10:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.393 03:10:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.393 03:10:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.393 03:10:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.393 03:10:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.393 03:10:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.393 03:10:00 -- spdk/autotest.sh@48 -- # udevadm_pid=54404 00:03:11.393 03:10:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.393 03:10:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.393 03:10:00 -- pm/common@17 -- # local monitor 00:03:11.393 03:10:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.393 03:10:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.393 03:10:00 -- pm/common@25 -- # sleep 1 00:03:11.393 03:10:00 -- pm/common@21 -- # date +%s 00:03:11.393 03:10:00 -- 
pm/common@21 -- # date +%s 00:03:11.393 03:10:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732072200 00:03:11.393 03:10:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732072200 00:03:11.393 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732072200_collect-vmstat.pm.log 00:03:11.393 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732072200_collect-cpu-load.pm.log 00:03:12.366 03:10:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.366 03:10:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.366 03:10:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.366 03:10:01 -- common/autotest_common.sh@10 -- # set +x 00:03:12.366 03:10:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.366 03:10:01 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:12.366 03:10:01 -- common/autotest_common.sh@10 -- # set +x 00:03:12.627 03:10:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:12.627 03:10:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:12.627 03:10:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:12.627 03:10:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:12.627 03:10:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:12.627 03:10:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.627 03:10:02 -- common/autotest_common.sh@1457 -- # uname 00:03:12.627 03:10:02 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:12.627 03:10:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.627 03:10:02 -- common/autotest_common.sh@1477 -- 
# uname 00:03:12.627 03:10:02 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:12.627 03:10:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:12.627 03:10:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.627 lcov: LCOV version 1.15 00:03:12.627 03:10:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:27.514 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:42.416 03:10:30 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:42.416 03:10:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.416 03:10:30 -- common/autotest_common.sh@10 -- # set +x 00:03:42.416 03:10:30 -- spdk/autotest.sh@78 -- # rm -f 00:03:42.416 03:10:30 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.416 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:42.416 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:42.416 03:10:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:42.416 03:10:31 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:42.417 03:10:31 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:42.417 03:10:31 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:42.417 
03:10:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.417 03:10:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:42.417 03:10:31 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:42.417 03:10:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.417 03:10:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.417 03:10:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.417 03:10:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:42.417 03:10:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:42.417 03:10:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:42.417 03:10:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.417 03:10:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.417 03:10:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:42.417 03:10:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:42.417 03:10:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:42.417 03:10:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.417 03:10:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.417 03:10:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:42.417 03:10:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:42.417 03:10:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:42.417 03:10:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.417 03:10:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:42.417 03:10:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.417 03:10:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.417 03:10:31 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:42.417 03:10:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:42.417 03:10:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:42.417 No valid GPT data, bailing 00:03:42.417 03:10:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.417 03:10:31 -- scripts/common.sh@394 -- # pt= 00:03:42.417 03:10:31 -- scripts/common.sh@395 -- # return 1 00:03:42.417 03:10:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:42.417 1+0 records in 00:03:42.417 1+0 records out 00:03:42.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627259 s, 167 MB/s 00:03:42.417 03:10:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.417 03:10:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.417 03:10:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:42.417 03:10:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:42.417 03:10:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:42.417 No valid GPT data, bailing 00:03:42.417 03:10:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:42.417 03:10:31 -- scripts/common.sh@394 -- # pt= 00:03:42.417 03:10:31 -- scripts/common.sh@395 -- # return 1 00:03:42.417 03:10:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:42.417 1+0 records in 00:03:42.417 1+0 records out 00:03:42.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408879 s, 256 MB/s 00:03:42.417 03:10:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.417 03:10:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.417 03:10:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:42.417 03:10:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:42.417 03:10:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:03:42.417 No valid GPT data, bailing 00:03:42.417 03:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:42.417 03:10:32 -- scripts/common.sh@394 -- # pt= 00:03:42.417 03:10:32 -- scripts/common.sh@395 -- # return 1 00:03:42.417 03:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:42.417 1+0 records in 00:03:42.417 1+0 records out 00:03:42.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432433 s, 242 MB/s 00:03:42.417 03:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.677 03:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.677 03:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:42.677 03:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:42.677 03:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:42.677 No valid GPT data, bailing 00:03:42.677 03:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:42.677 03:10:32 -- scripts/common.sh@394 -- # pt= 00:03:42.677 03:10:32 -- scripts/common.sh@395 -- # return 1 00:03:42.677 03:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:42.677 1+0 records in 00:03:42.677 1+0 records out 00:03:42.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00606163 s, 173 MB/s 00:03:42.677 03:10:32 -- spdk/autotest.sh@105 -- # sync 00:03:42.677 03:10:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.677 03:10:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.677 03:10:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.244 03:10:34 -- spdk/autotest.sh@111 -- # uname -s 00:03:45.244 03:10:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:45.244 03:10:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:45.244 03:10:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:03:46.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.182 Hugepages 00:03:46.182 node hugesize free / total 00:03:46.182 node0 1048576kB 0 / 0 00:03:46.182 node0 2048kB 0 / 0 00:03:46.182 00:03:46.182 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:46.182 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:46.443 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:46.443 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:46.443 03:10:35 -- spdk/autotest.sh@117 -- # uname -s 00:03:46.443 03:10:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:46.443 03:10:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:46.443 03:10:35 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.380 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.380 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.380 03:10:36 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:48.314 03:10:37 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:48.314 03:10:37 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:48.314 03:10:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:48.314 03:10:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:48.314 03:10:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:48.314 03:10:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:48.314 03:10:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.314 03:10:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:48.314 03:10:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:48.571 03:10:38 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:48.571 03:10:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:48.571 03:10:38 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.088 Waiting for block devices as requested 00:03:49.088 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:49.088 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:49.088 03:10:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.088 03:10:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:49.088 03:10:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:49.088 03:10:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:49.088 03:10:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:49.088 03:10:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:49.347 03:10:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:49.347 03:10:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:49.347 03:10:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:49.347 03:10:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:49.347 03:10:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:49.347 03:10:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.347 03:10:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.347 03:10:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:49.347 03:10:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.347 03:10:38 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:03:49.347 03:10:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.347 03:10:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:49.347 03:10:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.347 03:10:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.347 03:10:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:49.347 03:10:38 -- common/autotest_common.sh@1543 -- # continue 00:03:49.347 03:10:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.347 03:10:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:49.347 03:10:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:49.347 03:10:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:49.348 03:10:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:49.348 03:10:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:49.348 03:10:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:49.348 03:10:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:49.348 03:10:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:49.348 03:10:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:49.348 03:10:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:49.348 03:10:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.348 03:10:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.348 03:10:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:49.348 03:10:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.348 03:10:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:49.348 03:10:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:03:49.348 03:10:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.348 03:10:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.348 03:10:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.348 03:10:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:49.348 03:10:38 -- common/autotest_common.sh@1543 -- # continue 00:03:49.348 03:10:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:49.348 03:10:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.348 03:10:38 -- common/autotest_common.sh@10 -- # set +x 00:03:49.348 03:10:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:49.348 03:10:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.348 03:10:38 -- common/autotest_common.sh@10 -- # set +x 00:03:49.348 03:10:38 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:50.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.285 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.285 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.285 03:10:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:50.285 03:10:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.285 03:10:39 -- common/autotest_common.sh@10 -- # set +x 00:03:50.285 03:10:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:50.285 03:10:39 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:50.544 03:10:39 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:50.544 03:10:39 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:50.544 03:10:39 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:50.544 03:10:39 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:50.544 03:10:39 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:50.544 03:10:39 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:50.544 
03:10:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:50.544 03:10:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:50.544 03:10:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:50.544 03:10:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:50.544 03:10:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:50.544 03:10:40 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:50.544 03:10:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:50.544 03:10:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.544 03:10:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:50.544 03:10:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.544 03:10:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.544 03:10:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.544 03:10:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:50.544 03:10:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.544 03:10:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.544 03:10:40 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:50.544 03:10:40 -- common/autotest_common.sh@1572 -- # return 0 00:03:50.544 03:10:40 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:50.544 03:10:40 -- common/autotest_common.sh@1580 -- # return 0 00:03:50.544 03:10:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:50.544 03:10:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:50.544 03:10:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.544 03:10:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.544 03:10:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:50.544 03:10:40 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.544 03:10:40 -- common/autotest_common.sh@10 -- # set +x 00:03:50.544 03:10:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:50.544 03:10:40 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.544 03:10:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.544 03:10:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.544 03:10:40 -- common/autotest_common.sh@10 -- # set +x 00:03:50.544 ************************************ 00:03:50.544 START TEST env 00:03:50.544 ************************************ 00:03:50.544 03:10:40 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.544 * Looking for test storage... 00:03:50.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:50.544 03:10:40 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:50.804 03:10:40 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:50.804 03:10:40 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:50.804 03:10:40 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:50.804 03:10:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.804 03:10:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.804 03:10:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.804 03:10:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.804 03:10:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.804 03:10:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.804 03:10:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.804 03:10:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.804 03:10:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.804 03:10:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.804 03:10:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.804 03:10:40 env -- 
scripts/common.sh@344 -- # case "$op" in 00:03:50.805 03:10:40 env -- scripts/common.sh@345 -- # : 1 00:03:50.805 03:10:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.805 03:10:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.805 03:10:40 env -- scripts/common.sh@365 -- # decimal 1 00:03:50.805 03:10:40 env -- scripts/common.sh@353 -- # local d=1 00:03:50.805 03:10:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.805 03:10:40 env -- scripts/common.sh@355 -- # echo 1 00:03:50.805 03:10:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.805 03:10:40 env -- scripts/common.sh@366 -- # decimal 2 00:03:50.805 03:10:40 env -- scripts/common.sh@353 -- # local d=2 00:03:50.805 03:10:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.805 03:10:40 env -- scripts/common.sh@355 -- # echo 2 00:03:50.805 03:10:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.805 03:10:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.805 03:10:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.805 03:10:40 env -- scripts/common.sh@368 -- # return 0 00:03:50.805 03:10:40 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.805 03:10:40 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.805 --rc genhtml_branch_coverage=1 00:03:50.805 --rc genhtml_function_coverage=1 00:03:50.805 --rc genhtml_legend=1 00:03:50.805 --rc geninfo_all_blocks=1 00:03:50.805 --rc geninfo_unexecuted_blocks=1 00:03:50.805 00:03:50.805 ' 00:03:50.805 03:10:40 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.805 --rc genhtml_branch_coverage=1 00:03:50.805 --rc genhtml_function_coverage=1 00:03:50.805 --rc genhtml_legend=1 00:03:50.805 --rc 
geninfo_all_blocks=1 00:03:50.805 --rc geninfo_unexecuted_blocks=1 00:03:50.805 00:03:50.805 ' 00:03:50.805 03:10:40 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.805 --rc genhtml_branch_coverage=1 00:03:50.805 --rc genhtml_function_coverage=1 00:03:50.805 --rc genhtml_legend=1 00:03:50.805 --rc geninfo_all_blocks=1 00:03:50.805 --rc geninfo_unexecuted_blocks=1 00:03:50.805 00:03:50.805 ' 00:03:50.805 03:10:40 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.805 --rc genhtml_branch_coverage=1 00:03:50.805 --rc genhtml_function_coverage=1 00:03:50.805 --rc genhtml_legend=1 00:03:50.805 --rc geninfo_all_blocks=1 00:03:50.805 --rc geninfo_unexecuted_blocks=1 00:03:50.805 00:03:50.805 ' 00:03:50.805 03:10:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.805 03:10:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.805 03:10:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.805 03:10:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.805 ************************************ 00:03:50.805 START TEST env_memory 00:03:50.805 ************************************ 00:03:50.805 03:10:40 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.805 00:03:50.805 00:03:50.805 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.805 http://cunit.sourceforge.net/ 00:03:50.805 00:03:50.805 00:03:50.805 Suite: memory 00:03:50.805 Test: alloc and free memory map ...[2024-11-20 03:10:40.356308] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:50.805 passed 00:03:50.805 Test: mem map translation ...[2024-11-20 03:10:40.399816] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:50.805 [2024-11-20 03:10:40.399873] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:50.805 [2024-11-20 03:10:40.399937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:50.805 [2024-11-20 03:10:40.399958] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:51.064 passed 00:03:51.064 Test: mem map registration ...[2024-11-20 03:10:40.467378] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:51.064 [2024-11-20 03:10:40.467425] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:51.064 passed 00:03:51.064 Test: mem map adjacent registrations ...passed 00:03:51.064 00:03:51.064 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.064 suites 1 1 n/a 0 0 00:03:51.064 tests 4 4 4 0 0 00:03:51.064 asserts 152 152 152 0 n/a 00:03:51.064 00:03:51.064 Elapsed time = 0.242 seconds 00:03:51.064 00:03:51.064 real 0m0.294s 00:03:51.064 user 0m0.259s 00:03:51.064 sys 0m0.023s 00:03:51.064 03:10:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.065 03:10:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:51.065 ************************************ 00:03:51.065 END TEST env_memory 00:03:51.065 ************************************ 00:03:51.065 03:10:40 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:51.065 
03:10:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.065 03:10:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.065 03:10:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.065 ************************************ 00:03:51.065 START TEST env_vtophys 00:03:51.065 ************************************ 00:03:51.065 03:10:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:51.065 EAL: lib.eal log level changed from notice to debug 00:03:51.065 EAL: Detected lcore 0 as core 0 on socket 0 00:03:51.065 EAL: Detected lcore 1 as core 0 on socket 0 00:03:51.065 EAL: Detected lcore 2 as core 0 on socket 0 00:03:51.065 EAL: Detected lcore 3 as core 0 on socket 0 00:03:51.065 EAL: Detected lcore 4 as core 0 on socket 0 00:03:51.065 EAL: Detected lcore 5 as core 0 on socket 0 00:03:51.065 EAL: Detected lcore 6 as core 0 on socket 0 00:03:51.065 EAL: Detected lcore 7 as core 0 on socket 0 00:03:51.065 EAL: Detected lcore 8 as core 0 on socket 0 00:03:51.065 EAL: Detected lcore 9 as core 0 on socket 0 00:03:51.065 EAL: Maximum logical cores by configuration: 128 00:03:51.065 EAL: Detected CPU lcores: 10 00:03:51.065 EAL: Detected NUMA nodes: 1 00:03:51.065 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:51.065 EAL: Detected shared linkage of DPDK 00:03:51.324 EAL: No shared files mode enabled, IPC will be disabled 00:03:51.324 EAL: Selected IOVA mode 'PA' 00:03:51.324 EAL: Probing VFIO support... 00:03:51.324 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:51.324 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:51.324 EAL: Ask a virtual area of 0x2e000 bytes 00:03:51.324 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:51.324 EAL: Setting up physically contiguous memory... 
00:03:51.324 EAL: Setting maximum number of open files to 524288 00:03:51.324 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:51.324 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:51.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.324 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:51.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.324 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:51.324 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:51.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.324 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:51.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.324 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:51.324 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:51.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.324 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:51.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.324 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:51.324 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:51.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.324 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:51.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.324 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:51.324 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:51.324 EAL: Hugepages will be freed exactly as allocated. 
00:03:51.324 EAL: No shared files mode enabled, IPC is disabled 00:03:51.324 EAL: No shared files mode enabled, IPC is disabled 00:03:51.324 EAL: TSC frequency is ~2290000 KHz 00:03:51.324 EAL: Main lcore 0 is ready (tid=7f4fb18e6a40;cpuset=[0]) 00:03:51.325 EAL: Trying to obtain current memory policy. 00:03:51.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.325 EAL: Restoring previous memory policy: 0 00:03:51.325 EAL: request: mp_malloc_sync 00:03:51.325 EAL: No shared files mode enabled, IPC is disabled 00:03:51.325 EAL: Heap on socket 0 was expanded by 2MB 00:03:51.325 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:51.325 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:51.325 EAL: Mem event callback 'spdk:(nil)' registered 00:03:51.325 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:51.325 00:03:51.325 00:03:51.325 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.325 http://cunit.sourceforge.net/ 00:03:51.325 00:03:51.325 00:03:51.325 Suite: components_suite 00:03:51.585 Test: vtophys_malloc_test ...passed 00:03:51.585 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:51.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.585 EAL: Restoring previous memory policy: 4 00:03:51.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.585 EAL: request: mp_malloc_sync 00:03:51.585 EAL: No shared files mode enabled, IPC is disabled 00:03:51.585 EAL: Heap on socket 0 was expanded by 4MB 00:03:51.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.585 EAL: request: mp_malloc_sync 00:03:51.585 EAL: No shared files mode enabled, IPC is disabled 00:03:51.585 EAL: Heap on socket 0 was shrunk by 4MB 00:03:51.585 EAL: Trying to obtain current memory policy. 
00:03:51.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.845 EAL: Restoring previous memory policy: 4 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was expanded by 6MB 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was shrunk by 6MB 00:03:51.845 EAL: Trying to obtain current memory policy. 00:03:51.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.845 EAL: Restoring previous memory policy: 4 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was expanded by 10MB 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was shrunk by 10MB 00:03:51.845 EAL: Trying to obtain current memory policy. 00:03:51.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.845 EAL: Restoring previous memory policy: 4 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was expanded by 18MB 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was shrunk by 18MB 00:03:51.845 EAL: Trying to obtain current memory policy. 
00:03:51.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.845 EAL: Restoring previous memory policy: 4 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was expanded by 34MB 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was shrunk by 34MB 00:03:51.845 EAL: Trying to obtain current memory policy. 00:03:51.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.845 EAL: Restoring previous memory policy: 4 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was expanded by 66MB 00:03:52.105 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.105 EAL: request: mp_malloc_sync 00:03:52.105 EAL: No shared files mode enabled, IPC is disabled 00:03:52.105 EAL: Heap on socket 0 was shrunk by 66MB 00:03:52.105 EAL: Trying to obtain current memory policy. 00:03:52.105 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.105 EAL: Restoring previous memory policy: 4 00:03:52.105 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.105 EAL: request: mp_malloc_sync 00:03:52.105 EAL: No shared files mode enabled, IPC is disabled 00:03:52.105 EAL: Heap on socket 0 was expanded by 130MB 00:03:52.365 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.365 EAL: request: mp_malloc_sync 00:03:52.365 EAL: No shared files mode enabled, IPC is disabled 00:03:52.365 EAL: Heap on socket 0 was shrunk by 130MB 00:03:52.625 EAL: Trying to obtain current memory policy. 
00:03:52.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.625 EAL: Restoring previous memory policy: 4 00:03:52.625 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.625 EAL: request: mp_malloc_sync 00:03:52.625 EAL: No shared files mode enabled, IPC is disabled 00:03:52.625 EAL: Heap on socket 0 was expanded by 258MB 00:03:53.208 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.208 EAL: request: mp_malloc_sync 00:03:53.208 EAL: No shared files mode enabled, IPC is disabled 00:03:53.208 EAL: Heap on socket 0 was shrunk by 258MB 00:03:53.494 EAL: Trying to obtain current memory policy. 00:03:53.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.753 EAL: Restoring previous memory policy: 4 00:03:53.753 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.753 EAL: request: mp_malloc_sync 00:03:53.753 EAL: No shared files mode enabled, IPC is disabled 00:03:53.753 EAL: Heap on socket 0 was expanded by 514MB 00:03:54.690 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.690 EAL: request: mp_malloc_sync 00:03:54.690 EAL: No shared files mode enabled, IPC is disabled 00:03:54.690 EAL: Heap on socket 0 was shrunk by 514MB 00:03:55.628 EAL: Trying to obtain current memory policy. 
00:03:55.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.628 EAL: Restoring previous memory policy: 4 00:03:55.628 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.628 EAL: request: mp_malloc_sync 00:03:55.628 EAL: No shared files mode enabled, IPC is disabled 00:03:55.628 EAL: Heap on socket 0 was expanded by 1026MB 00:03:57.533 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.792 EAL: request: mp_malloc_sync 00:03:57.792 EAL: No shared files mode enabled, IPC is disabled 00:03:57.792 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:59.702 passed 00:03:59.702 00:03:59.702 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.702 suites 1 1 n/a 0 0 00:03:59.702 tests 2 2 2 0 0 00:03:59.702 asserts 5502 5502 5502 0 n/a 00:03:59.702 00:03:59.702 Elapsed time = 8.028 seconds 00:03:59.702 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.702 EAL: request: mp_malloc_sync 00:03:59.702 EAL: No shared files mode enabled, IPC is disabled 00:03:59.702 EAL: Heap on socket 0 was shrunk by 2MB 00:03:59.702 EAL: No shared files mode enabled, IPC is disabled 00:03:59.702 EAL: No shared files mode enabled, IPC is disabled 00:03:59.702 EAL: No shared files mode enabled, IPC is disabled 00:03:59.702 00:03:59.702 real 0m8.365s 00:03:59.702 user 0m7.423s 00:03:59.702 sys 0m0.774s 00:03:59.702 03:10:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.702 03:10:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:59.702 ************************************ 00:03:59.702 END TEST env_vtophys 00:03:59.702 ************************************ 00:03:59.702 03:10:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:59.702 03:10:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.702 03:10:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.702 03:10:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.702 
************************************ 00:03:59.702 START TEST env_pci 00:03:59.702 ************************************ 00:03:59.702 03:10:49 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:59.702 00:03:59.702 00:03:59.702 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.703 http://cunit.sourceforge.net/ 00:03:59.703 00:03:59.703 00:03:59.703 Suite: pci 00:03:59.703 Test: pci_hook ...[2024-11-20 03:10:49.098049] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56694 has claimed it 00:03:59.703 passed 00:03:59.703 00:03:59.703 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.703 suites 1 1 n/a 0 0 00:03:59.703 tests 1 1 1 0 0 00:03:59.703 asserts 25 25 25 0 n/a 00:03:59.703 00:03:59.703 Elapsed time = 0.006 seconds 00:03:59.703 EAL: Cannot find device (10000:00:01.0) 00:03:59.703 EAL: Failed to attach device on primary process 00:03:59.703 00:03:59.703 real 0m0.093s 00:03:59.703 user 0m0.041s 00:03:59.703 sys 0m0.051s 00:03:59.703 03:10:49 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.703 03:10:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:59.703 ************************************ 00:03:59.703 END TEST env_pci 00:03:59.703 ************************************ 00:03:59.703 03:10:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:59.703 03:10:49 env -- env/env.sh@15 -- # uname 00:03:59.703 03:10:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:59.703 03:10:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:59.703 03:10:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.703 03:10:49 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:59.703 03:10:49 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.703 03:10:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.703 ************************************ 00:03:59.703 START TEST env_dpdk_post_init 00:03:59.703 ************************************ 00:03:59.703 03:10:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.703 EAL: Detected CPU lcores: 10 00:03:59.703 EAL: Detected NUMA nodes: 1 00:03:59.703 EAL: Detected shared linkage of DPDK 00:03:59.703 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.703 EAL: Selected IOVA mode 'PA' 00:03:59.964 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.964 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:59.964 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:59.964 Starting DPDK initialization... 00:03:59.964 Starting SPDK post initialization... 00:03:59.964 SPDK NVMe probe 00:03:59.964 Attaching to 0000:00:10.0 00:03:59.964 Attaching to 0000:00:11.0 00:03:59.964 Attached to 0000:00:10.0 00:03:59.964 Attached to 0000:00:11.0 00:03:59.964 Cleaning up... 
00:03:59.964 00:03:59.964 real 0m0.285s 00:03:59.964 user 0m0.079s 00:03:59.964 sys 0m0.107s 00:03:59.964 03:10:49 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.964 03:10:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.964 ************************************ 00:03:59.964 END TEST env_dpdk_post_init 00:03:59.964 ************************************ 00:03:59.964 03:10:49 env -- env/env.sh@26 -- # uname 00:03:59.964 03:10:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:59.964 03:10:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.964 03:10:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.964 03:10:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.964 03:10:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.964 ************************************ 00:03:59.964 START TEST env_mem_callbacks 00:03:59.964 ************************************ 00:03:59.964 03:10:49 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.225 EAL: Detected CPU lcores: 10 00:04:00.225 EAL: Detected NUMA nodes: 1 00:04:00.225 EAL: Detected shared linkage of DPDK 00:04:00.225 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.225 EAL: Selected IOVA mode 'PA' 00:04:00.225 00:04:00.225 00:04:00.225 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.225 http://cunit.sourceforge.net/ 00:04:00.225 00:04:00.225 00:04:00.225 Suite: memory 00:04:00.225 Test: test ...TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.225 00:04:00.225 register 0x200000200000 2097152 00:04:00.225 malloc 3145728 00:04:00.225 register 0x200000400000 4194304 00:04:00.225 buf 0x2000004fffc0 len 3145728 PASSED 00:04:00.225 malloc 64 00:04:00.225 buf 0x2000004ffec0 len 64 PASSED 00:04:00.225 malloc 
4194304 00:04:00.225 register 0x200000800000 6291456 00:04:00.225 buf 0x2000009fffc0 len 4194304 PASSED 00:04:00.225 free 0x2000004fffc0 3145728 00:04:00.225 free 0x2000004ffec0 64 00:04:00.225 unregister 0x200000400000 4194304 PASSED 00:04:00.225 free 0x2000009fffc0 4194304 00:04:00.225 unregister 0x200000800000 6291456 PASSED 00:04:00.225 malloc 8388608 00:04:00.225 register 0x200000400000 10485760 00:04:00.225 buf 0x2000005fffc0 len 8388608 PASSED 00:04:00.225 free 0x2000005fffc0 8388608 00:04:00.225 unregister 0x200000400000 10485760 PASSED 00:04:00.225 passed 00:04:00.225 00:04:00.225 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.225 suites 1 1 n/a 0 0 00:04:00.225 tests 1 1 1 0 0 00:04:00.225 asserts 15 15 15 0 n/a 00:04:00.225 00:04:00.225 Elapsed time = 0.089 seconds 00:04:00.485 00:04:00.485 real 0m0.293s 00:04:00.485 user 0m0.112s 00:04:00.485 sys 0m0.079s 00:04:00.485 03:10:49 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.485 03:10:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:00.485 ************************************ 00:04:00.485 END TEST env_mem_callbacks 00:04:00.485 ************************************ 00:04:00.485 00:04:00.485 real 0m9.856s 00:04:00.485 user 0m8.126s 00:04:00.485 sys 0m1.368s 00:04:00.485 03:10:49 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.485 03:10:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.485 ************************************ 00:04:00.485 END TEST env 00:04:00.485 ************************************ 00:04:00.485 03:10:49 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:00.485 03:10:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.485 03:10:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.485 03:10:49 -- common/autotest_common.sh@10 -- # set +x 00:04:00.485 ************************************ 00:04:00.485 START TEST rpc 
00:04:00.485 ************************************ 00:04:00.485 03:10:49 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:00.485 * Looking for test storage... 00:04:00.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:00.485 03:10:50 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.485 03:10:50 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.485 03:10:50 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.745 03:10:50 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.745 03:10:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.745 03:10:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.745 03:10:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.745 03:10:50 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.745 03:10:50 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.745 03:10:50 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.745 03:10:50 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.745 03:10:50 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.745 03:10:50 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.745 03:10:50 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.745 03:10:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.745 03:10:50 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:00.745 03:10:50 rpc -- scripts/common.sh@345 -- # : 1 00:04:00.745 03:10:50 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.745 03:10:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.745 03:10:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:00.745 03:10:50 rpc -- scripts/common.sh@353 -- # local d=1 00:04:00.745 03:10:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.745 03:10:50 rpc -- scripts/common.sh@355 -- # echo 1 00:04:00.745 03:10:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.745 03:10:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:00.745 03:10:50 rpc -- scripts/common.sh@353 -- # local d=2 00:04:00.745 03:10:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.745 03:10:50 rpc -- scripts/common.sh@355 -- # echo 2 00:04:00.745 03:10:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.745 03:10:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.745 03:10:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.745 03:10:50 rpc -- scripts/common.sh@368 -- # return 0 00:04:00.745 03:10:50 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.745 03:10:50 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.745 --rc genhtml_branch_coverage=1 00:04:00.745 --rc genhtml_function_coverage=1 00:04:00.745 --rc genhtml_legend=1 00:04:00.745 --rc geninfo_all_blocks=1 00:04:00.745 --rc geninfo_unexecuted_blocks=1 00:04:00.745 00:04:00.745 ' 00:04:00.745 03:10:50 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.745 --rc genhtml_branch_coverage=1 00:04:00.745 --rc genhtml_function_coverage=1 00:04:00.745 --rc genhtml_legend=1 00:04:00.745 --rc geninfo_all_blocks=1 00:04:00.745 --rc geninfo_unexecuted_blocks=1 00:04:00.745 00:04:00.745 ' 00:04:00.745 03:10:50 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:00.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:00.745 --rc genhtml_branch_coverage=1 00:04:00.745 --rc genhtml_function_coverage=1 00:04:00.745 --rc genhtml_legend=1 00:04:00.745 --rc geninfo_all_blocks=1 00:04:00.745 --rc geninfo_unexecuted_blocks=1 00:04:00.745 00:04:00.745 ' 00:04:00.745 03:10:50 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.746 --rc genhtml_branch_coverage=1 00:04:00.746 --rc genhtml_function_coverage=1 00:04:00.746 --rc genhtml_legend=1 00:04:00.746 --rc geninfo_all_blocks=1 00:04:00.746 --rc geninfo_unexecuted_blocks=1 00:04:00.746 00:04:00.746 ' 00:04:00.746 03:10:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56821 00:04:00.746 03:10:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:00.746 03:10:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.746 03:10:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56821 00:04:00.746 03:10:50 rpc -- common/autotest_common.sh@835 -- # '[' -z 56821 ']' 00:04:00.746 03:10:50 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.746 03:10:50 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.746 03:10:50 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.746 03:10:50 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.746 03:10:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.746 [2024-11-20 03:10:50.292016] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:00.746 [2024-11-20 03:10:50.292177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56821 ] 00:04:01.006 [2024-11-20 03:10:50.467301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.006 [2024-11-20 03:10:50.588008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:01.006 [2024-11-20 03:10:50.588077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56821' to capture a snapshot of events at runtime. 00:04:01.006 [2024-11-20 03:10:50.588087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:01.006 [2024-11-20 03:10:50.588097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:01.006 [2024-11-20 03:10:50.588122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56821 for offline analysis/debug. 
00:04:01.006 [2024-11-20 03:10:50.589573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.945 03:10:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.945 03:10:51 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:01.945 03:10:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.945 03:10:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.945 03:10:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:01.945 03:10:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:01.945 03:10:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.945 03:10:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.945 03:10:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.945 ************************************ 00:04:01.945 START TEST rpc_integrity 00:04:01.945 ************************************ 00:04:01.945 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:01.945 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.945 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.945 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.945 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.945 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.945 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.206 03:10:51 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.206 { 00:04:02.206 "name": "Malloc0", 00:04:02.206 "aliases": [ 00:04:02.206 "430230bd-629d-477b-a01d-31675f91d17d" 00:04:02.206 ], 00:04:02.206 "product_name": "Malloc disk", 00:04:02.206 "block_size": 512, 00:04:02.206 "num_blocks": 16384, 00:04:02.206 "uuid": "430230bd-629d-477b-a01d-31675f91d17d", 00:04:02.206 "assigned_rate_limits": { 00:04:02.206 "rw_ios_per_sec": 0, 00:04:02.206 "rw_mbytes_per_sec": 0, 00:04:02.206 "r_mbytes_per_sec": 0, 00:04:02.206 "w_mbytes_per_sec": 0 00:04:02.206 }, 00:04:02.206 "claimed": false, 00:04:02.206 "zoned": false, 00:04:02.206 "supported_io_types": { 00:04:02.206 "read": true, 00:04:02.206 "write": true, 00:04:02.206 "unmap": true, 00:04:02.206 "flush": true, 00:04:02.206 "reset": true, 00:04:02.206 "nvme_admin": false, 00:04:02.206 "nvme_io": false, 00:04:02.206 "nvme_io_md": false, 00:04:02.206 "write_zeroes": true, 00:04:02.206 "zcopy": true, 00:04:02.206 "get_zone_info": false, 00:04:02.206 "zone_management": false, 00:04:02.206 "zone_append": false, 00:04:02.206 "compare": false, 00:04:02.206 "compare_and_write": false, 00:04:02.206 "abort": true, 00:04:02.206 "seek_hole": false, 
00:04:02.206 "seek_data": false, 00:04:02.206 "copy": true, 00:04:02.206 "nvme_iov_md": false 00:04:02.206 }, 00:04:02.206 "memory_domains": [ 00:04:02.206 { 00:04:02.206 "dma_device_id": "system", 00:04:02.206 "dma_device_type": 1 00:04:02.206 }, 00:04:02.206 { 00:04:02.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.206 "dma_device_type": 2 00:04:02.206 } 00:04:02.206 ], 00:04:02.206 "driver_specific": {} 00:04:02.206 } 00:04:02.206 ]' 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.206 [2024-11-20 03:10:51.674251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:02.206 [2024-11-20 03:10:51.674379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.206 [2024-11-20 03:10:51.674425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:02.206 [2024-11-20 03:10:51.674443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.206 [2024-11-20 03:10:51.676805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.206 [2024-11-20 03:10:51.676850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.206 Passthru0 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:02.206 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.206 { 00:04:02.206 "name": "Malloc0", 00:04:02.206 "aliases": [ 00:04:02.206 "430230bd-629d-477b-a01d-31675f91d17d" 00:04:02.206 ], 00:04:02.206 "product_name": "Malloc disk", 00:04:02.206 "block_size": 512, 00:04:02.206 "num_blocks": 16384, 00:04:02.206 "uuid": "430230bd-629d-477b-a01d-31675f91d17d", 00:04:02.206 "assigned_rate_limits": { 00:04:02.206 "rw_ios_per_sec": 0, 00:04:02.206 "rw_mbytes_per_sec": 0, 00:04:02.206 "r_mbytes_per_sec": 0, 00:04:02.206 "w_mbytes_per_sec": 0 00:04:02.206 }, 00:04:02.206 "claimed": true, 00:04:02.206 "claim_type": "exclusive_write", 00:04:02.206 "zoned": false, 00:04:02.206 "supported_io_types": { 00:04:02.206 "read": true, 00:04:02.206 "write": true, 00:04:02.206 "unmap": true, 00:04:02.206 "flush": true, 00:04:02.206 "reset": true, 00:04:02.206 "nvme_admin": false, 00:04:02.206 "nvme_io": false, 00:04:02.206 "nvme_io_md": false, 00:04:02.206 "write_zeroes": true, 00:04:02.206 "zcopy": true, 00:04:02.206 "get_zone_info": false, 00:04:02.206 "zone_management": false, 00:04:02.206 "zone_append": false, 00:04:02.206 "compare": false, 00:04:02.206 "compare_and_write": false, 00:04:02.206 "abort": true, 00:04:02.206 "seek_hole": false, 00:04:02.206 "seek_data": false, 00:04:02.206 "copy": true, 00:04:02.206 "nvme_iov_md": false 00:04:02.206 }, 00:04:02.206 "memory_domains": [ 00:04:02.206 { 00:04:02.206 "dma_device_id": "system", 00:04:02.206 "dma_device_type": 1 00:04:02.206 }, 00:04:02.206 { 00:04:02.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.206 "dma_device_type": 2 00:04:02.206 } 00:04:02.206 ], 00:04:02.206 "driver_specific": {} 00:04:02.206 }, 00:04:02.206 { 00:04:02.206 "name": "Passthru0", 00:04:02.206 "aliases": [ 00:04:02.206 "5a5a70da-90fb-56da-a6ba-65fcd5d5442e" 00:04:02.206 ], 00:04:02.206 "product_name": "passthru", 00:04:02.206 
"block_size": 512, 00:04:02.206 "num_blocks": 16384, 00:04:02.206 "uuid": "5a5a70da-90fb-56da-a6ba-65fcd5d5442e", 00:04:02.206 "assigned_rate_limits": { 00:04:02.206 "rw_ios_per_sec": 0, 00:04:02.206 "rw_mbytes_per_sec": 0, 00:04:02.206 "r_mbytes_per_sec": 0, 00:04:02.206 "w_mbytes_per_sec": 0 00:04:02.206 }, 00:04:02.206 "claimed": false, 00:04:02.206 "zoned": false, 00:04:02.206 "supported_io_types": { 00:04:02.206 "read": true, 00:04:02.206 "write": true, 00:04:02.206 "unmap": true, 00:04:02.206 "flush": true, 00:04:02.206 "reset": true, 00:04:02.206 "nvme_admin": false, 00:04:02.206 "nvme_io": false, 00:04:02.206 "nvme_io_md": false, 00:04:02.206 "write_zeroes": true, 00:04:02.206 "zcopy": true, 00:04:02.206 "get_zone_info": false, 00:04:02.206 "zone_management": false, 00:04:02.206 "zone_append": false, 00:04:02.206 "compare": false, 00:04:02.206 "compare_and_write": false, 00:04:02.206 "abort": true, 00:04:02.206 "seek_hole": false, 00:04:02.206 "seek_data": false, 00:04:02.206 "copy": true, 00:04:02.206 "nvme_iov_md": false 00:04:02.206 }, 00:04:02.206 "memory_domains": [ 00:04:02.206 { 00:04:02.206 "dma_device_id": "system", 00:04:02.206 "dma_device_type": 1 00:04:02.206 }, 00:04:02.206 { 00:04:02.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.206 "dma_device_type": 2 00:04:02.206 } 00:04:02.206 ], 00:04:02.206 "driver_specific": { 00:04:02.206 "passthru": { 00:04:02.206 "name": "Passthru0", 00:04:02.206 "base_bdev_name": "Malloc0" 00:04:02.206 } 00:04:02.206 } 00:04:02.206 } 00:04:02.206 ]' 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:02.206 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.207 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.207 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.207 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.207 03:10:51 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.207 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:02.207 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.207 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.207 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.207 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.207 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.207 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.207 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.207 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.207 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:02.467 ************************************ 00:04:02.467 END TEST rpc_integrity 00:04:02.467 ************************************ 00:04:02.468 03:10:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:02.468 00:04:02.468 real 0m0.351s 00:04:02.468 user 0m0.184s 00:04:02.468 sys 0m0.061s 00:04:02.468 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.468 03:10:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.468 03:10:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:02.468 03:10:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.468 03:10:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.468 03:10:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.468 ************************************ 00:04:02.468 START TEST rpc_plugins 00:04:02.468 ************************************ 00:04:02.468 03:10:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:02.468 03:10:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:02.468 03:10:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.468 03:10:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.468 03:10:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.468 03:10:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:02.468 03:10:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:02.468 03:10:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.468 03:10:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.468 03:10:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.468 03:10:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:02.468 { 00:04:02.468 "name": "Malloc1", 00:04:02.468 "aliases": [ 00:04:02.468 "5680c0d2-6ae9-45f2-a208-33ab31a0471d" 00:04:02.468 ], 00:04:02.468 "product_name": "Malloc disk", 00:04:02.468 "block_size": 4096, 00:04:02.468 "num_blocks": 256, 00:04:02.468 "uuid": "5680c0d2-6ae9-45f2-a208-33ab31a0471d", 00:04:02.468 "assigned_rate_limits": { 00:04:02.468 "rw_ios_per_sec": 0, 00:04:02.468 "rw_mbytes_per_sec": 0, 00:04:02.468 "r_mbytes_per_sec": 0, 00:04:02.468 "w_mbytes_per_sec": 0 00:04:02.468 }, 00:04:02.468 "claimed": false, 00:04:02.468 "zoned": false, 00:04:02.468 "supported_io_types": { 00:04:02.468 "read": true, 00:04:02.468 "write": true, 00:04:02.468 "unmap": true, 00:04:02.468 "flush": true, 00:04:02.468 "reset": true, 00:04:02.468 "nvme_admin": false, 00:04:02.468 "nvme_io": false, 00:04:02.468 "nvme_io_md": false, 00:04:02.468 "write_zeroes": true, 00:04:02.468 "zcopy": true, 00:04:02.468 "get_zone_info": false, 00:04:02.468 "zone_management": false, 00:04:02.468 "zone_append": false, 00:04:02.468 "compare": false, 00:04:02.468 "compare_and_write": false, 00:04:02.468 "abort": true, 00:04:02.468 "seek_hole": false, 00:04:02.468 "seek_data": false, 00:04:02.468 "copy": 
true, 00:04:02.468 "nvme_iov_md": false 00:04:02.468 }, 00:04:02.468 "memory_domains": [ 00:04:02.468 { 00:04:02.468 "dma_device_id": "system", 00:04:02.468 "dma_device_type": 1 00:04:02.468 }, 00:04:02.468 { 00:04:02.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.468 "dma_device_type": 2 00:04:02.468 } 00:04:02.468 ], 00:04:02.468 "driver_specific": {} 00:04:02.468 } 00:04:02.468 ]' 00:04:02.468 03:10:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:02.468 03:10:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:02.468 03:10:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:02.468 03:10:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.468 03:10:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.468 03:10:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.468 03:10:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:02.468 03:10:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.468 03:10:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.468 03:10:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.468 03:10:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:02.468 03:10:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:02.468 ************************************ 00:04:02.468 END TEST rpc_plugins 00:04:02.468 ************************************ 00:04:02.468 03:10:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:02.468 00:04:02.468 real 0m0.157s 00:04:02.468 user 0m0.084s 00:04:02.468 sys 0m0.024s 00:04:02.468 03:10:52 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.468 03:10:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.728 03:10:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:02.728 03:10:52 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.728 03:10:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.728 03:10:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.728 ************************************ 00:04:02.728 START TEST rpc_trace_cmd_test 00:04:02.728 ************************************ 00:04:02.728 03:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:02.728 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:02.728 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:02.728 03:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.728 03:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:02.728 03:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.728 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:02.728 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56821", 00:04:02.728 "tpoint_group_mask": "0x8", 00:04:02.728 "iscsi_conn": { 00:04:02.728 "mask": "0x2", 00:04:02.728 "tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "scsi": { 00:04:02.728 "mask": "0x4", 00:04:02.728 "tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "bdev": { 00:04:02.728 "mask": "0x8", 00:04:02.728 "tpoint_mask": "0xffffffffffffffff" 00:04:02.728 }, 00:04:02.728 "nvmf_rdma": { 00:04:02.728 "mask": "0x10", 00:04:02.728 "tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "nvmf_tcp": { 00:04:02.728 "mask": "0x20", 00:04:02.728 "tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "ftl": { 00:04:02.728 "mask": "0x40", 00:04:02.728 "tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "blobfs": { 00:04:02.728 "mask": "0x80", 00:04:02.728 "tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "dsa": { 00:04:02.728 "mask": "0x200", 00:04:02.728 "tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "thread": { 00:04:02.728 "mask": "0x400", 00:04:02.728 
"tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "nvme_pcie": { 00:04:02.728 "mask": "0x800", 00:04:02.728 "tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "iaa": { 00:04:02.728 "mask": "0x1000", 00:04:02.728 "tpoint_mask": "0x0" 00:04:02.728 }, 00:04:02.728 "nvme_tcp": { 00:04:02.729 "mask": "0x2000", 00:04:02.729 "tpoint_mask": "0x0" 00:04:02.729 }, 00:04:02.729 "bdev_nvme": { 00:04:02.729 "mask": "0x4000", 00:04:02.729 "tpoint_mask": "0x0" 00:04:02.729 }, 00:04:02.729 "sock": { 00:04:02.729 "mask": "0x8000", 00:04:02.729 "tpoint_mask": "0x0" 00:04:02.729 }, 00:04:02.729 "blob": { 00:04:02.729 "mask": "0x10000", 00:04:02.729 "tpoint_mask": "0x0" 00:04:02.729 }, 00:04:02.729 "bdev_raid": { 00:04:02.729 "mask": "0x20000", 00:04:02.729 "tpoint_mask": "0x0" 00:04:02.729 }, 00:04:02.729 "scheduler": { 00:04:02.729 "mask": "0x40000", 00:04:02.729 "tpoint_mask": "0x0" 00:04:02.729 } 00:04:02.729 }' 00:04:02.729 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:02.729 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:02.729 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:02.729 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:02.729 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:02.729 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:02.729 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:02.729 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:02.729 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:02.989 ************************************ 00:04:02.989 END TEST rpc_trace_cmd_test 00:04:02.989 ************************************ 00:04:02.989 03:10:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:02.989 00:04:02.989 real 0m0.245s 00:04:02.989 user 
0m0.203s 00:04:02.989 sys 0m0.032s 00:04:02.989 03:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.989 03:10:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:02.989 03:10:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:02.989 03:10:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:02.989 03:10:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:02.989 03:10:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.989 03:10:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.989 03:10:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.989 ************************************ 00:04:02.989 START TEST rpc_daemon_integrity 00:04:02.989 ************************************ 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.989 { 00:04:02.989 "name": "Malloc2", 00:04:02.989 "aliases": [ 00:04:02.989 "988b5ced-bc0a-43e8-8469-f652f029e2bd" 00:04:02.989 ], 00:04:02.989 "product_name": "Malloc disk", 00:04:02.989 "block_size": 512, 00:04:02.989 "num_blocks": 16384, 00:04:02.989 "uuid": "988b5ced-bc0a-43e8-8469-f652f029e2bd", 00:04:02.989 "assigned_rate_limits": { 00:04:02.989 "rw_ios_per_sec": 0, 00:04:02.989 "rw_mbytes_per_sec": 0, 00:04:02.989 "r_mbytes_per_sec": 0, 00:04:02.989 "w_mbytes_per_sec": 0 00:04:02.989 }, 00:04:02.989 "claimed": false, 00:04:02.989 "zoned": false, 00:04:02.989 "supported_io_types": { 00:04:02.989 "read": true, 00:04:02.989 "write": true, 00:04:02.989 "unmap": true, 00:04:02.989 "flush": true, 00:04:02.989 "reset": true, 00:04:02.989 "nvme_admin": false, 00:04:02.989 "nvme_io": false, 00:04:02.989 "nvme_io_md": false, 00:04:02.989 "write_zeroes": true, 00:04:02.989 "zcopy": true, 00:04:02.989 "get_zone_info": false, 00:04:02.989 "zone_management": false, 00:04:02.989 "zone_append": false, 00:04:02.989 "compare": false, 00:04:02.989 "compare_and_write": false, 00:04:02.989 "abort": true, 00:04:02.989 "seek_hole": false, 00:04:02.989 "seek_data": false, 00:04:02.989 "copy": true, 00:04:02.989 "nvme_iov_md": false 00:04:02.989 }, 00:04:02.989 "memory_domains": [ 00:04:02.989 { 00:04:02.989 "dma_device_id": "system", 00:04:02.989 "dma_device_type": 1 00:04:02.989 }, 00:04:02.989 { 00:04:02.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.989 "dma_device_type": 2 00:04:02.989 } 
00:04:02.989 ], 00:04:02.989 "driver_specific": {} 00:04:02.989 } 00:04:02.989 ]' 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.989 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:02.990 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.990 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.990 [2024-11-20 03:10:52.602424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:02.990 [2024-11-20 03:10:52.602497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.990 [2024-11-20 03:10:52.602522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:02.990 [2024-11-20 03:10:52.602534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.990 [2024-11-20 03:10:52.605013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.990 [2024-11-20 03:10:52.605060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.990 Passthru0 00:04:02.990 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.990 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.990 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.990 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.249 { 00:04:03.249 "name": "Malloc2", 00:04:03.249 "aliases": [ 00:04:03.249 "988b5ced-bc0a-43e8-8469-f652f029e2bd" 
00:04:03.249 ], 00:04:03.249 "product_name": "Malloc disk", 00:04:03.249 "block_size": 512, 00:04:03.249 "num_blocks": 16384, 00:04:03.249 "uuid": "988b5ced-bc0a-43e8-8469-f652f029e2bd", 00:04:03.249 "assigned_rate_limits": { 00:04:03.249 "rw_ios_per_sec": 0, 00:04:03.249 "rw_mbytes_per_sec": 0, 00:04:03.249 "r_mbytes_per_sec": 0, 00:04:03.249 "w_mbytes_per_sec": 0 00:04:03.249 }, 00:04:03.249 "claimed": true, 00:04:03.249 "claim_type": "exclusive_write", 00:04:03.249 "zoned": false, 00:04:03.249 "supported_io_types": { 00:04:03.249 "read": true, 00:04:03.249 "write": true, 00:04:03.249 "unmap": true, 00:04:03.249 "flush": true, 00:04:03.249 "reset": true, 00:04:03.249 "nvme_admin": false, 00:04:03.249 "nvme_io": false, 00:04:03.249 "nvme_io_md": false, 00:04:03.249 "write_zeroes": true, 00:04:03.249 "zcopy": true, 00:04:03.249 "get_zone_info": false, 00:04:03.249 "zone_management": false, 00:04:03.249 "zone_append": false, 00:04:03.249 "compare": false, 00:04:03.249 "compare_and_write": false, 00:04:03.249 "abort": true, 00:04:03.249 "seek_hole": false, 00:04:03.249 "seek_data": false, 00:04:03.249 "copy": true, 00:04:03.249 "nvme_iov_md": false 00:04:03.249 }, 00:04:03.249 "memory_domains": [ 00:04:03.249 { 00:04:03.249 "dma_device_id": "system", 00:04:03.249 "dma_device_type": 1 00:04:03.249 }, 00:04:03.249 { 00:04:03.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.249 "dma_device_type": 2 00:04:03.249 } 00:04:03.249 ], 00:04:03.249 "driver_specific": {} 00:04:03.249 }, 00:04:03.249 { 00:04:03.249 "name": "Passthru0", 00:04:03.249 "aliases": [ 00:04:03.249 "45033a33-d56a-50cb-98db-32058bb9aa73" 00:04:03.249 ], 00:04:03.249 "product_name": "passthru", 00:04:03.249 "block_size": 512, 00:04:03.249 "num_blocks": 16384, 00:04:03.249 "uuid": "45033a33-d56a-50cb-98db-32058bb9aa73", 00:04:03.249 "assigned_rate_limits": { 00:04:03.249 "rw_ios_per_sec": 0, 00:04:03.249 "rw_mbytes_per_sec": 0, 00:04:03.249 "r_mbytes_per_sec": 0, 00:04:03.249 "w_mbytes_per_sec": 0 
00:04:03.249 }, 00:04:03.249 "claimed": false, 00:04:03.249 "zoned": false, 00:04:03.249 "supported_io_types": { 00:04:03.249 "read": true, 00:04:03.249 "write": true, 00:04:03.249 "unmap": true, 00:04:03.249 "flush": true, 00:04:03.249 "reset": true, 00:04:03.249 "nvme_admin": false, 00:04:03.249 "nvme_io": false, 00:04:03.249 "nvme_io_md": false, 00:04:03.249 "write_zeroes": true, 00:04:03.249 "zcopy": true, 00:04:03.249 "get_zone_info": false, 00:04:03.249 "zone_management": false, 00:04:03.249 "zone_append": false, 00:04:03.249 "compare": false, 00:04:03.249 "compare_and_write": false, 00:04:03.249 "abort": true, 00:04:03.249 "seek_hole": false, 00:04:03.249 "seek_data": false, 00:04:03.249 "copy": true, 00:04:03.249 "nvme_iov_md": false 00:04:03.249 }, 00:04:03.249 "memory_domains": [ 00:04:03.249 { 00:04:03.249 "dma_device_id": "system", 00:04:03.249 "dma_device_type": 1 00:04:03.249 }, 00:04:03.249 { 00:04:03.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.249 "dma_device_type": 2 00:04:03.249 } 00:04:03.249 ], 00:04:03.249 "driver_specific": { 00:04:03.249 "passthru": { 00:04:03.249 "name": "Passthru0", 00:04:03.249 "base_bdev_name": "Malloc2" 00:04:03.249 } 00:04:03.249 } 00:04:03.249 } 00:04:03.249 ]' 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.249 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.250 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.250 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.250 03:10:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.250 00:04:03.250 real 0m0.345s 00:04:03.250 user 0m0.196s 00:04:03.250 sys 0m0.048s 00:04:03.250 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.250 03:10:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.250 ************************************ 00:04:03.250 END TEST rpc_daemon_integrity 00:04:03.250 ************************************ 00:04:03.250 03:10:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:03.250 03:10:52 rpc -- rpc/rpc.sh@84 -- # killprocess 56821 00:04:03.250 03:10:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 56821 ']' 00:04:03.250 03:10:52 rpc -- common/autotest_common.sh@958 -- # kill -0 56821 00:04:03.250 03:10:52 rpc -- common/autotest_common.sh@959 -- # uname 00:04:03.250 03:10:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.250 03:10:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56821 00:04:03.547 killing process with pid 56821 00:04:03.547 03:10:52 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.547 03:10:52 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:03.547 03:10:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56821' 00:04:03.547 03:10:52 rpc -- common/autotest_common.sh@973 -- # kill 56821 00:04:03.547 03:10:52 rpc -- common/autotest_common.sh@978 -- # wait 56821 00:04:06.088 00:04:06.088 real 0m5.376s 00:04:06.088 user 0m5.934s 00:04:06.088 sys 0m0.911s 00:04:06.088 03:10:55 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.088 03:10:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.088 ************************************ 00:04:06.088 END TEST rpc 00:04:06.088 ************************************ 00:04:06.088 03:10:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:06.088 03:10:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.088 03:10:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.088 03:10:55 -- common/autotest_common.sh@10 -- # set +x 00:04:06.088 ************************************ 00:04:06.088 START TEST skip_rpc 00:04:06.088 ************************************ 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:06.088 * Looking for test storage... 
00:04:06.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.088 03:10:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.088 --rc genhtml_branch_coverage=1 00:04:06.088 --rc genhtml_function_coverage=1 00:04:06.088 --rc genhtml_legend=1 00:04:06.088 --rc geninfo_all_blocks=1 00:04:06.088 --rc geninfo_unexecuted_blocks=1 00:04:06.088 00:04:06.088 ' 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.088 --rc genhtml_branch_coverage=1 00:04:06.088 --rc genhtml_function_coverage=1 00:04:06.088 --rc genhtml_legend=1 00:04:06.088 --rc geninfo_all_blocks=1 00:04:06.088 --rc geninfo_unexecuted_blocks=1 00:04:06.088 00:04:06.088 ' 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:06.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.088 --rc genhtml_branch_coverage=1 00:04:06.088 --rc genhtml_function_coverage=1 00:04:06.088 --rc genhtml_legend=1 00:04:06.088 --rc geninfo_all_blocks=1 00:04:06.088 --rc geninfo_unexecuted_blocks=1 00:04:06.088 00:04:06.088 ' 00:04:06.088 03:10:55 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.089 --rc genhtml_branch_coverage=1 00:04:06.089 --rc genhtml_function_coverage=1 00:04:06.089 --rc genhtml_legend=1 00:04:06.089 --rc geninfo_all_blocks=1 00:04:06.089 --rc geninfo_unexecuted_blocks=1 00:04:06.089 00:04:06.089 ' 00:04:06.089 03:10:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:06.089 03:10:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:06.089 03:10:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:06.089 03:10:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.089 03:10:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.089 03:10:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.089 ************************************ 00:04:06.089 START TEST skip_rpc 00:04:06.089 ************************************ 00:04:06.089 03:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:06.089 03:10:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57056 00:04:06.089 03:10:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:06.089 03:10:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.089 03:10:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:06.089 [2024-11-20 03:10:55.717142] Starting SPDK v25.01-pre 
git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:06.089 [2024-11-20 03:10:55.717345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57056 ] 00:04:06.349 [2024-11-20 03:10:55.890014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.610 [2024-11-20 03:10:56.005022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57056 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57056 ']' 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57056 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57056 00:04:11.880 killing process with pid 57056 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57056' 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57056 00:04:11.880 03:11:00 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57056 00:04:13.786 00:04:13.786 real 0m7.526s 00:04:13.786 user 0m7.064s 00:04:13.786 sys 0m0.366s 00:04:13.786 03:11:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.786 ************************************ 00:04:13.786 END TEST skip_rpc 00:04:13.786 ************************************ 00:04:13.786 03:11:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.786 03:11:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:13.786 03:11:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.786 03:11:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.786 03:11:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.786 
************************************ 00:04:13.786 START TEST skip_rpc_with_json 00:04:13.786 ************************************ 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57160 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57160 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57160 ']' 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.786 03:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.786 [2024-11-20 03:11:03.305760] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:13.786 [2024-11-20 03:11:03.305971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57160 ] 00:04:14.045 [2024-11-20 03:11:03.481182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.045 [2024-11-20 03:11:03.606043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.992 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.992 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.993 [2024-11-20 03:11:04.480153] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:14.993 request: 00:04:14.993 { 00:04:14.993 "trtype": "tcp", 00:04:14.993 "method": "nvmf_get_transports", 00:04:14.993 "req_id": 1 00:04:14.993 } 00:04:14.993 Got JSON-RPC error response 00:04:14.993 response: 00:04:14.993 { 00:04:14.993 "code": -19, 00:04:14.993 "message": "No such device" 00:04:14.993 } 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.993 [2024-11-20 03:11:04.492236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.993 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.267 03:11:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:15.267 { 00:04:15.267 "subsystems": [ 00:04:15.267 { 00:04:15.267 "subsystem": "fsdev", 00:04:15.267 "config": [ 00:04:15.267 { 00:04:15.267 "method": "fsdev_set_opts", 00:04:15.267 "params": { 00:04:15.267 "fsdev_io_pool_size": 65535, 00:04:15.267 "fsdev_io_cache_size": 256 00:04:15.267 } 00:04:15.267 } 00:04:15.267 ] 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "subsystem": "keyring", 00:04:15.267 "config": [] 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "subsystem": "iobuf", 00:04:15.267 "config": [ 00:04:15.267 { 00:04:15.267 "method": "iobuf_set_options", 00:04:15.267 "params": { 00:04:15.267 "small_pool_count": 8192, 00:04:15.267 "large_pool_count": 1024, 00:04:15.267 "small_bufsize": 8192, 00:04:15.267 "large_bufsize": 135168, 00:04:15.267 "enable_numa": false 00:04:15.267 } 00:04:15.267 } 00:04:15.267 ] 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "subsystem": "sock", 00:04:15.267 "config": [ 00:04:15.267 { 00:04:15.267 "method": "sock_set_default_impl", 00:04:15.267 "params": { 00:04:15.267 "impl_name": "posix" 00:04:15.267 } 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "method": "sock_impl_set_options", 00:04:15.267 "params": { 00:04:15.267 "impl_name": "ssl", 00:04:15.267 "recv_buf_size": 4096, 00:04:15.267 "send_buf_size": 4096, 00:04:15.267 "enable_recv_pipe": true, 00:04:15.267 "enable_quickack": false, 00:04:15.267 
"enable_placement_id": 0, 00:04:15.267 "enable_zerocopy_send_server": true, 00:04:15.267 "enable_zerocopy_send_client": false, 00:04:15.267 "zerocopy_threshold": 0, 00:04:15.267 "tls_version": 0, 00:04:15.267 "enable_ktls": false 00:04:15.267 } 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "method": "sock_impl_set_options", 00:04:15.267 "params": { 00:04:15.267 "impl_name": "posix", 00:04:15.267 "recv_buf_size": 2097152, 00:04:15.267 "send_buf_size": 2097152, 00:04:15.267 "enable_recv_pipe": true, 00:04:15.267 "enable_quickack": false, 00:04:15.267 "enable_placement_id": 0, 00:04:15.267 "enable_zerocopy_send_server": true, 00:04:15.267 "enable_zerocopy_send_client": false, 00:04:15.267 "zerocopy_threshold": 0, 00:04:15.267 "tls_version": 0, 00:04:15.267 "enable_ktls": false 00:04:15.267 } 00:04:15.267 } 00:04:15.267 ] 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "subsystem": "vmd", 00:04:15.267 "config": [] 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "subsystem": "accel", 00:04:15.267 "config": [ 00:04:15.267 { 00:04:15.267 "method": "accel_set_options", 00:04:15.267 "params": { 00:04:15.267 "small_cache_size": 128, 00:04:15.268 "large_cache_size": 16, 00:04:15.268 "task_count": 2048, 00:04:15.268 "sequence_count": 2048, 00:04:15.268 "buf_count": 2048 00:04:15.268 } 00:04:15.268 } 00:04:15.268 ] 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "subsystem": "bdev", 00:04:15.268 "config": [ 00:04:15.268 { 00:04:15.268 "method": "bdev_set_options", 00:04:15.268 "params": { 00:04:15.268 "bdev_io_pool_size": 65535, 00:04:15.268 "bdev_io_cache_size": 256, 00:04:15.268 "bdev_auto_examine": true, 00:04:15.268 "iobuf_small_cache_size": 128, 00:04:15.268 "iobuf_large_cache_size": 16 00:04:15.268 } 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "method": "bdev_raid_set_options", 00:04:15.268 "params": { 00:04:15.268 "process_window_size_kb": 1024, 00:04:15.268 "process_max_bandwidth_mb_sec": 0 00:04:15.268 } 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "method": "bdev_iscsi_set_options", 
00:04:15.268 "params": { 00:04:15.268 "timeout_sec": 30 00:04:15.268 } 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "method": "bdev_nvme_set_options", 00:04:15.268 "params": { 00:04:15.268 "action_on_timeout": "none", 00:04:15.268 "timeout_us": 0, 00:04:15.268 "timeout_admin_us": 0, 00:04:15.268 "keep_alive_timeout_ms": 10000, 00:04:15.268 "arbitration_burst": 0, 00:04:15.268 "low_priority_weight": 0, 00:04:15.268 "medium_priority_weight": 0, 00:04:15.268 "high_priority_weight": 0, 00:04:15.268 "nvme_adminq_poll_period_us": 10000, 00:04:15.268 "nvme_ioq_poll_period_us": 0, 00:04:15.268 "io_queue_requests": 0, 00:04:15.268 "delay_cmd_submit": true, 00:04:15.268 "transport_retry_count": 4, 00:04:15.268 "bdev_retry_count": 3, 00:04:15.268 "transport_ack_timeout": 0, 00:04:15.268 "ctrlr_loss_timeout_sec": 0, 00:04:15.268 "reconnect_delay_sec": 0, 00:04:15.268 "fast_io_fail_timeout_sec": 0, 00:04:15.268 "disable_auto_failback": false, 00:04:15.268 "generate_uuids": false, 00:04:15.268 "transport_tos": 0, 00:04:15.268 "nvme_error_stat": false, 00:04:15.268 "rdma_srq_size": 0, 00:04:15.268 "io_path_stat": false, 00:04:15.268 "allow_accel_sequence": false, 00:04:15.268 "rdma_max_cq_size": 0, 00:04:15.268 "rdma_cm_event_timeout_ms": 0, 00:04:15.268 "dhchap_digests": [ 00:04:15.268 "sha256", 00:04:15.268 "sha384", 00:04:15.268 "sha512" 00:04:15.268 ], 00:04:15.268 "dhchap_dhgroups": [ 00:04:15.268 "null", 00:04:15.268 "ffdhe2048", 00:04:15.268 "ffdhe3072", 00:04:15.268 "ffdhe4096", 00:04:15.268 "ffdhe6144", 00:04:15.268 "ffdhe8192" 00:04:15.268 ] 00:04:15.268 } 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "method": "bdev_nvme_set_hotplug", 00:04:15.268 "params": { 00:04:15.268 "period_us": 100000, 00:04:15.268 "enable": false 00:04:15.268 } 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "method": "bdev_wait_for_examine" 00:04:15.268 } 00:04:15.268 ] 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "subsystem": "scsi", 00:04:15.268 "config": null 00:04:15.268 }, 00:04:15.268 { 
00:04:15.268 "subsystem": "scheduler", 00:04:15.268 "config": [ 00:04:15.268 { 00:04:15.268 "method": "framework_set_scheduler", 00:04:15.268 "params": { 00:04:15.268 "name": "static" 00:04:15.268 } 00:04:15.268 } 00:04:15.268 ] 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "subsystem": "vhost_scsi", 00:04:15.268 "config": [] 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "subsystem": "vhost_blk", 00:04:15.268 "config": [] 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "subsystem": "ublk", 00:04:15.268 "config": [] 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "subsystem": "nbd", 00:04:15.268 "config": [] 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "subsystem": "nvmf", 00:04:15.268 "config": [ 00:04:15.268 { 00:04:15.268 "method": "nvmf_set_config", 00:04:15.268 "params": { 00:04:15.268 "discovery_filter": "match_any", 00:04:15.268 "admin_cmd_passthru": { 00:04:15.268 "identify_ctrlr": false 00:04:15.268 }, 00:04:15.268 "dhchap_digests": [ 00:04:15.268 "sha256", 00:04:15.268 "sha384", 00:04:15.268 "sha512" 00:04:15.268 ], 00:04:15.268 "dhchap_dhgroups": [ 00:04:15.268 "null", 00:04:15.268 "ffdhe2048", 00:04:15.268 "ffdhe3072", 00:04:15.268 "ffdhe4096", 00:04:15.268 "ffdhe6144", 00:04:15.268 "ffdhe8192" 00:04:15.268 ] 00:04:15.268 } 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "method": "nvmf_set_max_subsystems", 00:04:15.268 "params": { 00:04:15.268 "max_subsystems": 1024 00:04:15.268 } 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "method": "nvmf_set_crdt", 00:04:15.268 "params": { 00:04:15.268 "crdt1": 0, 00:04:15.268 "crdt2": 0, 00:04:15.268 "crdt3": 0 00:04:15.268 } 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "method": "nvmf_create_transport", 00:04:15.268 "params": { 00:04:15.268 "trtype": "TCP", 00:04:15.268 "max_queue_depth": 128, 00:04:15.268 "max_io_qpairs_per_ctrlr": 127, 00:04:15.268 "in_capsule_data_size": 4096, 00:04:15.268 "max_io_size": 131072, 00:04:15.268 "io_unit_size": 131072, 00:04:15.268 "max_aq_depth": 128, 00:04:15.268 "num_shared_buffers": 511, 
00:04:15.268 "buf_cache_size": 4294967295, 00:04:15.268 "dif_insert_or_strip": false, 00:04:15.268 "zcopy": false, 00:04:15.268 "c2h_success": true, 00:04:15.268 "sock_priority": 0, 00:04:15.268 "abort_timeout_sec": 1, 00:04:15.268 "ack_timeout": 0, 00:04:15.268 "data_wr_pool_size": 0 00:04:15.268 } 00:04:15.268 } 00:04:15.268 ] 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "subsystem": "iscsi", 00:04:15.268 "config": [ 00:04:15.268 { 00:04:15.268 "method": "iscsi_set_options", 00:04:15.268 "params": { 00:04:15.268 "node_base": "iqn.2016-06.io.spdk", 00:04:15.268 "max_sessions": 128, 00:04:15.268 "max_connections_per_session": 2, 00:04:15.268 "max_queue_depth": 64, 00:04:15.268 "default_time2wait": 2, 00:04:15.268 "default_time2retain": 20, 00:04:15.268 "first_burst_length": 8192, 00:04:15.268 "immediate_data": true, 00:04:15.268 "allow_duplicated_isid": false, 00:04:15.269 "error_recovery_level": 0, 00:04:15.269 "nop_timeout": 60, 00:04:15.269 "nop_in_interval": 30, 00:04:15.269 "disable_chap": false, 00:04:15.269 "require_chap": false, 00:04:15.269 "mutual_chap": false, 00:04:15.269 "chap_group": 0, 00:04:15.269 "max_large_datain_per_connection": 64, 00:04:15.269 "max_r2t_per_connection": 4, 00:04:15.269 "pdu_pool_size": 36864, 00:04:15.269 "immediate_data_pool_size": 16384, 00:04:15.269 "data_out_pool_size": 2048 00:04:15.269 } 00:04:15.269 } 00:04:15.269 ] 00:04:15.269 } 00:04:15.269 ] 00:04:15.269 } 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57160 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57160 ']' 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57160 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57160 00:04:15.269 killing process with pid 57160 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57160' 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57160 00:04:15.269 03:11:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57160 00:04:17.810 03:11:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57216 00:04:17.810 03:11:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.810 03:11:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57216 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57216 ']' 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57216 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57216 00:04:23.090 killing process with pid 57216 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57216' 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57216 00:04:23.090 03:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57216 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:24.997 ************************************ 00:04:24.997 END TEST skip_rpc_with_json 00:04:24.997 ************************************ 00:04:24.997 00:04:24.997 real 0m11.191s 00:04:24.997 user 0m10.720s 00:04:24.997 sys 0m0.818s 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.997 03:11:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:24.997 03:11:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.997 03:11:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.997 03:11:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.997 ************************************ 00:04:24.997 START TEST skip_rpc_with_delay 00:04:24.997 ************************************ 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:24.997 
03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.997 [2024-11-20 03:11:14.563230] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:24.997 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:25.256 ************************************ 00:04:25.256 END TEST skip_rpc_with_delay 00:04:25.256 ************************************ 00:04:25.256 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:25.256 00:04:25.256 real 0m0.171s 00:04:25.256 user 0m0.092s 00:04:25.256 sys 0m0.078s 00:04:25.256 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.256 03:11:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:25.256 03:11:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:25.256 03:11:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:25.256 03:11:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:25.256 03:11:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.256 03:11:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.256 03:11:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.256 ************************************ 00:04:25.256 START TEST exit_on_failed_rpc_init 00:04:25.256 ************************************ 00:04:25.256 03:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:25.256 03:11:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57344 00:04:25.256 03:11:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.256 03:11:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57344 00:04:25.256 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.256 03:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57344 ']' 00:04:25.257 03:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.257 03:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.257 03:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.257 03:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.257 03:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.257 [2024-11-20 03:11:14.803918] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:25.257 [2024-11-20 03:11:14.804174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57344 ] 00:04:25.516 [2024-11-20 03:11:14.973206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.516 [2024-11-20 03:11:15.087255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@652 -- # local es=0 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:26.455 03:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.455 [2024-11-20 03:11:16.016336] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:26.455 [2024-11-20 03:11:16.016527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57362 ] 00:04:26.714 [2024-11-20 03:11:16.193008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.714 [2024-11-20 03:11:16.306161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.714 [2024-11-20 03:11:16.306332] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:26.714 [2024-11-20 03:11:16.306380] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:26.714 [2024-11-20 03:11:16.306418] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57344 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57344 ']' 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57344 00:04:26.973 03:11:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.973 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57344 00:04:27.233 killing process with pid 57344 00:04:27.233 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.233 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.233 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57344' 00:04:27.233 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57344 00:04:27.233 03:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57344 00:04:29.769 00:04:29.769 real 0m4.259s 00:04:29.769 user 0m4.603s 00:04:29.769 sys 0m0.542s 00:04:29.769 03:11:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.769 ************************************ 00:04:29.769 END TEST exit_on_failed_rpc_init 00:04:29.769 ************************************ 00:04:29.769 03:11:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.769 03:11:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:29.769 00:04:29.769 real 0m23.620s 00:04:29.769 user 0m22.682s 00:04:29.769 sys 0m2.088s 00:04:29.769 03:11:19 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.769 03:11:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.769 ************************************ 00:04:29.769 END TEST skip_rpc 00:04:29.769 ************************************ 00:04:29.769 03:11:19 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.769 03:11:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.769 03:11:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.769 03:11:19 -- common/autotest_common.sh@10 -- # set +x 00:04:29.769 ************************************ 00:04:29.769 START TEST rpc_client 00:04:29.769 ************************************ 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.769 * Looking for test storage... 00:04:29.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.769 03:11:19 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.769 --rc genhtml_branch_coverage=1 00:04:29.769 --rc genhtml_function_coverage=1 00:04:29.769 --rc genhtml_legend=1 00:04:29.769 --rc geninfo_all_blocks=1 00:04:29.769 --rc geninfo_unexecuted_blocks=1 00:04:29.769 00:04:29.769 ' 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.769 --rc genhtml_branch_coverage=1 00:04:29.769 --rc genhtml_function_coverage=1 00:04:29.769 --rc 
genhtml_legend=1 00:04:29.769 --rc geninfo_all_blocks=1 00:04:29.769 --rc geninfo_unexecuted_blocks=1 00:04:29.769 00:04:29.769 ' 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.769 --rc genhtml_branch_coverage=1 00:04:29.769 --rc genhtml_function_coverage=1 00:04:29.769 --rc genhtml_legend=1 00:04:29.769 --rc geninfo_all_blocks=1 00:04:29.769 --rc geninfo_unexecuted_blocks=1 00:04:29.769 00:04:29.769 ' 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.769 --rc genhtml_branch_coverage=1 00:04:29.769 --rc genhtml_function_coverage=1 00:04:29.769 --rc genhtml_legend=1 00:04:29.769 --rc geninfo_all_blocks=1 00:04:29.769 --rc geninfo_unexecuted_blocks=1 00:04:29.769 00:04:29.769 ' 00:04:29.769 03:11:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:29.769 OK 00:04:29.769 03:11:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:29.769 00:04:29.769 real 0m0.298s 00:04:29.769 user 0m0.182s 00:04:29.769 sys 0m0.133s 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.769 03:11:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:29.769 ************************************ 00:04:29.769 END TEST rpc_client 00:04:29.769 ************************************ 00:04:30.028 03:11:19 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:30.028 03:11:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.028 03:11:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.028 03:11:19 -- common/autotest_common.sh@10 -- # set +x 00:04:30.028 ************************************ 00:04:30.028 START TEST json_config 
00:04:30.028 ************************************ 00:04:30.028 03:11:19 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:30.028 03:11:19 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.028 03:11:19 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.029 03:11:19 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.029 03:11:19 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.029 03:11:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.029 03:11:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.029 03:11:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.029 03:11:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.029 03:11:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.029 03:11:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.029 03:11:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.029 03:11:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.029 03:11:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.029 03:11:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.029 03:11:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.029 03:11:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:30.029 03:11:19 json_config -- scripts/common.sh@345 -- # : 1 00:04:30.029 03:11:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.029 03:11:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.029 03:11:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:30.029 03:11:19 json_config -- scripts/common.sh@353 -- # local d=1 00:04:30.029 03:11:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.029 03:11:19 json_config -- scripts/common.sh@355 -- # echo 1 00:04:30.029 03:11:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.029 03:11:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:30.029 03:11:19 json_config -- scripts/common.sh@353 -- # local d=2 00:04:30.029 03:11:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.029 03:11:19 json_config -- scripts/common.sh@355 -- # echo 2 00:04:30.029 03:11:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.029 03:11:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.029 03:11:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.029 03:11:19 json_config -- scripts/common.sh@368 -- # return 0 00:04:30.029 03:11:19 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.029 03:11:19 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.029 --rc genhtml_branch_coverage=1 00:04:30.029 --rc genhtml_function_coverage=1 00:04:30.029 --rc genhtml_legend=1 00:04:30.029 --rc geninfo_all_blocks=1 00:04:30.029 --rc geninfo_unexecuted_blocks=1 00:04:30.029 00:04:30.029 ' 00:04:30.029 03:11:19 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.029 --rc genhtml_branch_coverage=1 00:04:30.029 --rc genhtml_function_coverage=1 00:04:30.029 --rc genhtml_legend=1 00:04:30.029 --rc geninfo_all_blocks=1 00:04:30.029 --rc geninfo_unexecuted_blocks=1 00:04:30.029 00:04:30.029 ' 00:04:30.029 03:11:19 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.029 --rc genhtml_branch_coverage=1 00:04:30.029 --rc genhtml_function_coverage=1 00:04:30.029 --rc genhtml_legend=1 00:04:30.029 --rc geninfo_all_blocks=1 00:04:30.029 --rc geninfo_unexecuted_blocks=1 00:04:30.029 00:04:30.029 ' 00:04:30.029 03:11:19 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.029 --rc genhtml_branch_coverage=1 00:04:30.029 --rc genhtml_function_coverage=1 00:04:30.029 --rc genhtml_legend=1 00:04:30.029 --rc geninfo_all_blocks=1 00:04:30.029 --rc geninfo_unexecuted_blocks=1 00:04:30.029 00:04:30.029 ' 00:04:30.029 03:11:19 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd400a4-2768-44b2-aa0b-0edb23284369 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=1dd400a4-2768-44b2-aa0b-0edb23284369 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.029 03:11:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.029 03:11:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.029 03:11:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.029 03:11:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.029 03:11:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.029 03:11:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.029 03:11:19 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.029 03:11:19 json_config -- paths/export.sh@5 -- # export PATH 00:04:30.029 03:11:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@51 -- # : 0 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.029 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.029 03:11:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.029 03:11:19 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:30.029 03:11:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:30.029 03:11:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:30.029 03:11:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:30.029 03:11:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.029 WARNING: No tests are enabled so not running JSON configuration tests 00:04:30.029 03:11:19 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:30.029 03:11:19 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:30.029 00:04:30.029 real 0m0.223s 00:04:30.029 user 0m0.134s 00:04:30.029 sys 0m0.096s 00:04:30.029 03:11:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.029 03:11:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.029 ************************************ 00:04:30.029 END TEST json_config 00:04:30.029 ************************************ 00:04:30.288 03:11:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:30.288 03:11:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.288 03:11:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.288 03:11:19 -- common/autotest_common.sh@10 -- # set +x 00:04:30.288 ************************************ 00:04:30.288 START TEST json_config_extra_key 00:04:30.288 ************************************ 00:04:30.288 03:11:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:30.288 03:11:19 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.288 03:11:19 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:30.288 03:11:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.288 03:11:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.288 03:11:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:30.288 03:11:19 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.288 03:11:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.288 --rc genhtml_branch_coverage=1 00:04:30.288 --rc genhtml_function_coverage=1 00:04:30.288 --rc genhtml_legend=1 00:04:30.288 --rc geninfo_all_blocks=1 00:04:30.288 --rc geninfo_unexecuted_blocks=1 00:04:30.288 00:04:30.288 ' 00:04:30.288 03:11:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.288 --rc genhtml_branch_coverage=1 00:04:30.288 --rc genhtml_function_coverage=1 00:04:30.288 --rc 
genhtml_legend=1 00:04:30.288 --rc geninfo_all_blocks=1 00:04:30.288 --rc geninfo_unexecuted_blocks=1 00:04:30.288 00:04:30.288 ' 00:04:30.288 03:11:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.288 --rc genhtml_branch_coverage=1 00:04:30.288 --rc genhtml_function_coverage=1 00:04:30.289 --rc genhtml_legend=1 00:04:30.289 --rc geninfo_all_blocks=1 00:04:30.289 --rc geninfo_unexecuted_blocks=1 00:04:30.289 00:04:30.289 ' 00:04:30.289 03:11:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.289 --rc genhtml_branch_coverage=1 00:04:30.289 --rc genhtml_function_coverage=1 00:04:30.289 --rc genhtml_legend=1 00:04:30.289 --rc geninfo_all_blocks=1 00:04:30.289 --rc geninfo_unexecuted_blocks=1 00:04:30.289 00:04:30.289 ' 00:04:30.289 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dd400a4-2768-44b2-aa0b-0edb23284369 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1dd400a4-2768-44b2-aa0b-0edb23284369 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.289 03:11:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.549 03:11:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.549 03:11:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.549 03:11:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.549 03:11:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.549 03:11:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.549 03:11:19 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.549 03:11:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.549 03:11:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:30.549 03:11:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.549 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.549 03:11:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.549 INFO: launching applications... 00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:30.549 03:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57572 00:04:30.549 Waiting for target to run... 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57572 /var/tmp/spdk_tgt.sock 00:04:30.549 03:11:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:30.549 03:11:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57572 ']' 00:04:30.550 03:11:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.550 03:11:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:30.550 03:11:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.550 03:11:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.550 03:11:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.550 [2024-11-20 03:11:20.033600] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:30.550 [2024-11-20 03:11:20.033727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57572 ] 00:04:30.809 [2024-11-20 03:11:20.414203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.069 [2024-11-20 03:11:20.514655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.638 03:11:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.638 03:11:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:31.638 00:04:31.638 03:11:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:31.638 INFO: shutting down applications... 00:04:31.638 03:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:31.638 03:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:31.638 03:11:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:31.638 03:11:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.638 03:11:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57572 ]] 00:04:31.638 03:11:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57572 00:04:31.638 03:11:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.638 03:11:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.638 03:11:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57572 00:04:31.638 03:11:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.207 03:11:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.207 03:11:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.207 03:11:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57572 00:04:32.207 03:11:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.810 03:11:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.810 03:11:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.810 03:11:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57572 00:04:32.810 03:11:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.421 03:11:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.421 03:11:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.421 03:11:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57572 00:04:33.421 03:11:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.681 03:11:23 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:33.681 03:11:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.681 03:11:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57572 00:04:33.681 03:11:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.250 03:11:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.250 03:11:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.250 03:11:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57572 00:04:34.250 03:11:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.820 03:11:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.820 03:11:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.820 03:11:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57572 00:04:34.820 03:11:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.820 03:11:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.820 03:11:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.820 SPDK target shutdown done 00:04:34.820 03:11:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.820 Success 00:04:34.820 03:11:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.820 00:04:34.820 real 0m4.531s 00:04:34.820 user 0m4.019s 00:04:34.820 sys 0m0.519s 00:04:34.820 03:11:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.820 03:11:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.820 ************************************ 00:04:34.820 END TEST json_config_extra_key 00:04:34.820 ************************************ 00:04:34.820 03:11:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.820 03:11:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.820 03:11:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.820 03:11:24 -- common/autotest_common.sh@10 -- # set +x 00:04:34.820 ************************************ 00:04:34.820 START TEST alias_rpc 00:04:34.820 ************************************ 00:04:34.820 03:11:24 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.820 * Looking for test storage... 00:04:34.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:34.820 03:11:24 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.820 03:11:24 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.820 03:11:24 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.080 03:11:24 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.080 03:11:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.080 --rc genhtml_branch_coverage=1 00:04:35.080 --rc genhtml_function_coverage=1 00:04:35.080 --rc genhtml_legend=1 00:04:35.080 --rc geninfo_all_blocks=1 00:04:35.080 --rc geninfo_unexecuted_blocks=1 00:04:35.080 00:04:35.080 ' 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.080 --rc genhtml_branch_coverage=1 00:04:35.080 --rc genhtml_function_coverage=1 00:04:35.080 --rc 
genhtml_legend=1 00:04:35.080 --rc geninfo_all_blocks=1 00:04:35.080 --rc geninfo_unexecuted_blocks=1 00:04:35.080 00:04:35.080 ' 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.080 --rc genhtml_branch_coverage=1 00:04:35.080 --rc genhtml_function_coverage=1 00:04:35.080 --rc genhtml_legend=1 00:04:35.080 --rc geninfo_all_blocks=1 00:04:35.080 --rc geninfo_unexecuted_blocks=1 00:04:35.080 00:04:35.080 ' 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.080 --rc genhtml_branch_coverage=1 00:04:35.080 --rc genhtml_function_coverage=1 00:04:35.080 --rc genhtml_legend=1 00:04:35.080 --rc geninfo_all_blocks=1 00:04:35.080 --rc geninfo_unexecuted_blocks=1 00:04:35.080 00:04:35.080 ' 00:04:35.080 03:11:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:35.080 03:11:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57688 00:04:35.080 03:11:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.080 03:11:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57688 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57688 ']' 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.080 03:11:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.080 [2024-11-20 03:11:24.626731] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:35.080 [2024-11-20 03:11:24.626852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57688 ] 00:04:35.340 [2024-11-20 03:11:24.801234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.340 [2024-11-20 03:11:24.912585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.278 03:11:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.278 03:11:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:36.278 03:11:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:36.536 03:11:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57688 00:04:36.536 03:11:25 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57688 ']' 00:04:36.536 03:11:25 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57688 00:04:36.536 03:11:25 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:36.536 03:11:25 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.536 03:11:25 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57688 00:04:36.536 03:11:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.536 03:11:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.536 killing process with pid 57688 00:04:36.536 03:11:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57688' 00:04:36.536 03:11:26 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57688 00:04:36.536 03:11:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 57688 00:04:39.069 00:04:39.069 real 0m4.088s 00:04:39.069 user 0m4.094s 00:04:39.069 sys 0m0.534s 00:04:39.069 03:11:28 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.069 03:11:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.069 ************************************ 00:04:39.069 END TEST alias_rpc 00:04:39.069 ************************************ 00:04:39.069 03:11:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:39.070 03:11:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:39.070 03:11:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.070 03:11:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.070 03:11:28 -- common/autotest_common.sh@10 -- # set +x 00:04:39.070 ************************************ 00:04:39.070 START TEST spdkcli_tcp 00:04:39.070 ************************************ 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:39.070 * Looking for test storage... 
00:04:39.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.070 03:11:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.070 --rc genhtml_branch_coverage=1 00:04:39.070 --rc genhtml_function_coverage=1 00:04:39.070 --rc genhtml_legend=1 00:04:39.070 --rc geninfo_all_blocks=1 00:04:39.070 --rc geninfo_unexecuted_blocks=1 00:04:39.070 00:04:39.070 ' 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.070 --rc genhtml_branch_coverage=1 00:04:39.070 --rc genhtml_function_coverage=1 00:04:39.070 --rc genhtml_legend=1 00:04:39.070 --rc geninfo_all_blocks=1 00:04:39.070 --rc geninfo_unexecuted_blocks=1 00:04:39.070 00:04:39.070 ' 00:04:39.070 03:11:28 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.070 --rc genhtml_branch_coverage=1 00:04:39.070 --rc genhtml_function_coverage=1 00:04:39.070 --rc genhtml_legend=1 00:04:39.070 --rc geninfo_all_blocks=1 00:04:39.070 --rc geninfo_unexecuted_blocks=1 00:04:39.070 00:04:39.070 ' 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.070 --rc genhtml_branch_coverage=1 00:04:39.070 --rc genhtml_function_coverage=1 00:04:39.070 --rc genhtml_legend=1 00:04:39.070 --rc geninfo_all_blocks=1 00:04:39.070 --rc geninfo_unexecuted_blocks=1 00:04:39.070 00:04:39.070 ' 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57791 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:39.070 03:11:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57791 00:04:39.070 03:11:28 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57791 ']' 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.070 03:11:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.328 [2024-11-20 03:11:28.779875] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:39.328 [2024-11-20 03:11:28.779988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57791 ] 00:04:39.328 [2024-11-20 03:11:28.954324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.586 [2024-11-20 03:11:29.073224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.586 [2024-11-20 03:11:29.073266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.520 03:11:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.520 03:11:29 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:40.520 03:11:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57813 00:04:40.520 03:11:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:40.520 03:11:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:40.520 [ 00:04:40.520 "bdev_malloc_delete", 
00:04:40.520 "bdev_malloc_create", 00:04:40.520 "bdev_null_resize", 00:04:40.520 "bdev_null_delete", 00:04:40.520 "bdev_null_create", 00:04:40.520 "bdev_nvme_cuse_unregister", 00:04:40.520 "bdev_nvme_cuse_register", 00:04:40.520 "bdev_opal_new_user", 00:04:40.520 "bdev_opal_set_lock_state", 00:04:40.520 "bdev_opal_delete", 00:04:40.520 "bdev_opal_get_info", 00:04:40.520 "bdev_opal_create", 00:04:40.520 "bdev_nvme_opal_revert", 00:04:40.520 "bdev_nvme_opal_init", 00:04:40.520 "bdev_nvme_send_cmd", 00:04:40.520 "bdev_nvme_set_keys", 00:04:40.520 "bdev_nvme_get_path_iostat", 00:04:40.520 "bdev_nvme_get_mdns_discovery_info", 00:04:40.520 "bdev_nvme_stop_mdns_discovery", 00:04:40.520 "bdev_nvme_start_mdns_discovery", 00:04:40.520 "bdev_nvme_set_multipath_policy", 00:04:40.520 "bdev_nvme_set_preferred_path", 00:04:40.520 "bdev_nvme_get_io_paths", 00:04:40.520 "bdev_nvme_remove_error_injection", 00:04:40.520 "bdev_nvme_add_error_injection", 00:04:40.520 "bdev_nvme_get_discovery_info", 00:04:40.520 "bdev_nvme_stop_discovery", 00:04:40.520 "bdev_nvme_start_discovery", 00:04:40.520 "bdev_nvme_get_controller_health_info", 00:04:40.520 "bdev_nvme_disable_controller", 00:04:40.520 "bdev_nvme_enable_controller", 00:04:40.520 "bdev_nvme_reset_controller", 00:04:40.520 "bdev_nvme_get_transport_statistics", 00:04:40.520 "bdev_nvme_apply_firmware", 00:04:40.520 "bdev_nvme_detach_controller", 00:04:40.520 "bdev_nvme_get_controllers", 00:04:40.520 "bdev_nvme_attach_controller", 00:04:40.520 "bdev_nvme_set_hotplug", 00:04:40.520 "bdev_nvme_set_options", 00:04:40.520 "bdev_passthru_delete", 00:04:40.520 "bdev_passthru_create", 00:04:40.520 "bdev_lvol_set_parent_bdev", 00:04:40.520 "bdev_lvol_set_parent", 00:04:40.520 "bdev_lvol_check_shallow_copy", 00:04:40.520 "bdev_lvol_start_shallow_copy", 00:04:40.520 "bdev_lvol_grow_lvstore", 00:04:40.520 "bdev_lvol_get_lvols", 00:04:40.520 "bdev_lvol_get_lvstores", 00:04:40.520 "bdev_lvol_delete", 00:04:40.520 "bdev_lvol_set_read_only", 
00:04:40.520 "bdev_lvol_resize", 00:04:40.520 "bdev_lvol_decouple_parent", 00:04:40.520 "bdev_lvol_inflate", 00:04:40.520 "bdev_lvol_rename", 00:04:40.520 "bdev_lvol_clone_bdev", 00:04:40.520 "bdev_lvol_clone", 00:04:40.520 "bdev_lvol_snapshot", 00:04:40.520 "bdev_lvol_create", 00:04:40.520 "bdev_lvol_delete_lvstore", 00:04:40.520 "bdev_lvol_rename_lvstore", 00:04:40.520 "bdev_lvol_create_lvstore", 00:04:40.520 "bdev_raid_set_options", 00:04:40.520 "bdev_raid_remove_base_bdev", 00:04:40.520 "bdev_raid_add_base_bdev", 00:04:40.520 "bdev_raid_delete", 00:04:40.520 "bdev_raid_create", 00:04:40.520 "bdev_raid_get_bdevs", 00:04:40.520 "bdev_error_inject_error", 00:04:40.520 "bdev_error_delete", 00:04:40.520 "bdev_error_create", 00:04:40.520 "bdev_split_delete", 00:04:40.520 "bdev_split_create", 00:04:40.520 "bdev_delay_delete", 00:04:40.520 "bdev_delay_create", 00:04:40.520 "bdev_delay_update_latency", 00:04:40.520 "bdev_zone_block_delete", 00:04:40.520 "bdev_zone_block_create", 00:04:40.520 "blobfs_create", 00:04:40.520 "blobfs_detect", 00:04:40.520 "blobfs_set_cache_size", 00:04:40.520 "bdev_aio_delete", 00:04:40.520 "bdev_aio_rescan", 00:04:40.521 "bdev_aio_create", 00:04:40.521 "bdev_ftl_set_property", 00:04:40.521 "bdev_ftl_get_properties", 00:04:40.521 "bdev_ftl_get_stats", 00:04:40.521 "bdev_ftl_unmap", 00:04:40.521 "bdev_ftl_unload", 00:04:40.521 "bdev_ftl_delete", 00:04:40.521 "bdev_ftl_load", 00:04:40.521 "bdev_ftl_create", 00:04:40.521 "bdev_virtio_attach_controller", 00:04:40.521 "bdev_virtio_scsi_get_devices", 00:04:40.521 "bdev_virtio_detach_controller", 00:04:40.521 "bdev_virtio_blk_set_hotplug", 00:04:40.521 "bdev_iscsi_delete", 00:04:40.521 "bdev_iscsi_create", 00:04:40.521 "bdev_iscsi_set_options", 00:04:40.521 "accel_error_inject_error", 00:04:40.521 "ioat_scan_accel_module", 00:04:40.521 "dsa_scan_accel_module", 00:04:40.521 "iaa_scan_accel_module", 00:04:40.521 "keyring_file_remove_key", 00:04:40.521 "keyring_file_add_key", 00:04:40.521 
"keyring_linux_set_options", 00:04:40.521 "fsdev_aio_delete", 00:04:40.521 "fsdev_aio_create", 00:04:40.521 "iscsi_get_histogram", 00:04:40.521 "iscsi_enable_histogram", 00:04:40.521 "iscsi_set_options", 00:04:40.521 "iscsi_get_auth_groups", 00:04:40.521 "iscsi_auth_group_remove_secret", 00:04:40.521 "iscsi_auth_group_add_secret", 00:04:40.521 "iscsi_delete_auth_group", 00:04:40.521 "iscsi_create_auth_group", 00:04:40.521 "iscsi_set_discovery_auth", 00:04:40.521 "iscsi_get_options", 00:04:40.521 "iscsi_target_node_request_logout", 00:04:40.521 "iscsi_target_node_set_redirect", 00:04:40.521 "iscsi_target_node_set_auth", 00:04:40.521 "iscsi_target_node_add_lun", 00:04:40.521 "iscsi_get_stats", 00:04:40.521 "iscsi_get_connections", 00:04:40.521 "iscsi_portal_group_set_auth", 00:04:40.521 "iscsi_start_portal_group", 00:04:40.521 "iscsi_delete_portal_group", 00:04:40.521 "iscsi_create_portal_group", 00:04:40.521 "iscsi_get_portal_groups", 00:04:40.521 "iscsi_delete_target_node", 00:04:40.521 "iscsi_target_node_remove_pg_ig_maps", 00:04:40.521 "iscsi_target_node_add_pg_ig_maps", 00:04:40.521 "iscsi_create_target_node", 00:04:40.521 "iscsi_get_target_nodes", 00:04:40.521 "iscsi_delete_initiator_group", 00:04:40.521 "iscsi_initiator_group_remove_initiators", 00:04:40.521 "iscsi_initiator_group_add_initiators", 00:04:40.521 "iscsi_create_initiator_group", 00:04:40.521 "iscsi_get_initiator_groups", 00:04:40.521 "nvmf_set_crdt", 00:04:40.521 "nvmf_set_config", 00:04:40.521 "nvmf_set_max_subsystems", 00:04:40.521 "nvmf_stop_mdns_prr", 00:04:40.521 "nvmf_publish_mdns_prr", 00:04:40.521 "nvmf_subsystem_get_listeners", 00:04:40.521 "nvmf_subsystem_get_qpairs", 00:04:40.521 "nvmf_subsystem_get_controllers", 00:04:40.521 "nvmf_get_stats", 00:04:40.521 "nvmf_get_transports", 00:04:40.521 "nvmf_create_transport", 00:04:40.521 "nvmf_get_targets", 00:04:40.521 "nvmf_delete_target", 00:04:40.521 "nvmf_create_target", 00:04:40.521 "nvmf_subsystem_allow_any_host", 00:04:40.521 
"nvmf_subsystem_set_keys", 00:04:40.521 "nvmf_subsystem_remove_host", 00:04:40.521 "nvmf_subsystem_add_host", 00:04:40.521 "nvmf_ns_remove_host", 00:04:40.521 "nvmf_ns_add_host", 00:04:40.521 "nvmf_subsystem_remove_ns", 00:04:40.521 "nvmf_subsystem_set_ns_ana_group", 00:04:40.521 "nvmf_subsystem_add_ns", 00:04:40.521 "nvmf_subsystem_listener_set_ana_state", 00:04:40.521 "nvmf_discovery_get_referrals", 00:04:40.521 "nvmf_discovery_remove_referral", 00:04:40.521 "nvmf_discovery_add_referral", 00:04:40.521 "nvmf_subsystem_remove_listener", 00:04:40.521 "nvmf_subsystem_add_listener", 00:04:40.521 "nvmf_delete_subsystem", 00:04:40.521 "nvmf_create_subsystem", 00:04:40.521 "nvmf_get_subsystems", 00:04:40.521 "env_dpdk_get_mem_stats", 00:04:40.521 "nbd_get_disks", 00:04:40.521 "nbd_stop_disk", 00:04:40.521 "nbd_start_disk", 00:04:40.521 "ublk_recover_disk", 00:04:40.521 "ublk_get_disks", 00:04:40.521 "ublk_stop_disk", 00:04:40.521 "ublk_start_disk", 00:04:40.521 "ublk_destroy_target", 00:04:40.521 "ublk_create_target", 00:04:40.521 "virtio_blk_create_transport", 00:04:40.521 "virtio_blk_get_transports", 00:04:40.521 "vhost_controller_set_coalescing", 00:04:40.521 "vhost_get_controllers", 00:04:40.521 "vhost_delete_controller", 00:04:40.521 "vhost_create_blk_controller", 00:04:40.521 "vhost_scsi_controller_remove_target", 00:04:40.521 "vhost_scsi_controller_add_target", 00:04:40.521 "vhost_start_scsi_controller", 00:04:40.521 "vhost_create_scsi_controller", 00:04:40.521 "thread_set_cpumask", 00:04:40.521 "scheduler_set_options", 00:04:40.521 "framework_get_governor", 00:04:40.521 "framework_get_scheduler", 00:04:40.521 "framework_set_scheduler", 00:04:40.521 "framework_get_reactors", 00:04:40.521 "thread_get_io_channels", 00:04:40.521 "thread_get_pollers", 00:04:40.521 "thread_get_stats", 00:04:40.521 "framework_monitor_context_switch", 00:04:40.521 "spdk_kill_instance", 00:04:40.521 "log_enable_timestamps", 00:04:40.521 "log_get_flags", 00:04:40.521 "log_clear_flag", 
00:04:40.521 "log_set_flag", 00:04:40.521 "log_get_level", 00:04:40.521 "log_set_level", 00:04:40.521 "log_get_print_level", 00:04:40.521 "log_set_print_level", 00:04:40.521 "framework_enable_cpumask_locks", 00:04:40.521 "framework_disable_cpumask_locks", 00:04:40.521 "framework_wait_init", 00:04:40.521 "framework_start_init", 00:04:40.521 "scsi_get_devices", 00:04:40.521 "bdev_get_histogram", 00:04:40.521 "bdev_enable_histogram", 00:04:40.521 "bdev_set_qos_limit", 00:04:40.521 "bdev_set_qd_sampling_period", 00:04:40.521 "bdev_get_bdevs", 00:04:40.521 "bdev_reset_iostat", 00:04:40.521 "bdev_get_iostat", 00:04:40.521 "bdev_examine", 00:04:40.521 "bdev_wait_for_examine", 00:04:40.521 "bdev_set_options", 00:04:40.521 "accel_get_stats", 00:04:40.521 "accel_set_options", 00:04:40.521 "accel_set_driver", 00:04:40.521 "accel_crypto_key_destroy", 00:04:40.521 "accel_crypto_keys_get", 00:04:40.521 "accel_crypto_key_create", 00:04:40.521 "accel_assign_opc", 00:04:40.521 "accel_get_module_info", 00:04:40.521 "accel_get_opc_assignments", 00:04:40.521 "vmd_rescan", 00:04:40.521 "vmd_remove_device", 00:04:40.521 "vmd_enable", 00:04:40.521 "sock_get_default_impl", 00:04:40.521 "sock_set_default_impl", 00:04:40.521 "sock_impl_set_options", 00:04:40.521 "sock_impl_get_options", 00:04:40.521 "iobuf_get_stats", 00:04:40.521 "iobuf_set_options", 00:04:40.521 "keyring_get_keys", 00:04:40.521 "framework_get_pci_devices", 00:04:40.521 "framework_get_config", 00:04:40.521 "framework_get_subsystems", 00:04:40.521 "fsdev_set_opts", 00:04:40.521 "fsdev_get_opts", 00:04:40.521 "trace_get_info", 00:04:40.521 "trace_get_tpoint_group_mask", 00:04:40.521 "trace_disable_tpoint_group", 00:04:40.521 "trace_enable_tpoint_group", 00:04:40.521 "trace_clear_tpoint_mask", 00:04:40.521 "trace_set_tpoint_mask", 00:04:40.521 "notify_get_notifications", 00:04:40.521 "notify_get_types", 00:04:40.521 "spdk_get_version", 00:04:40.521 "rpc_get_methods" 00:04:40.521 ] 00:04:40.521 03:11:30 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:40.521 03:11:30 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.521 03:11:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.779 03:11:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:40.779 03:11:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57791 00:04:40.779 03:11:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57791 ']' 00:04:40.780 03:11:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57791 00:04:40.780 03:11:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:40.780 03:11:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.780 03:11:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57791 00:04:40.780 03:11:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.780 03:11:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.780 killing process with pid 57791 00:04:40.780 03:11:30 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57791' 00:04:40.780 03:11:30 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57791 00:04:40.780 03:11:30 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57791 00:04:43.308 00:04:43.308 real 0m4.231s 00:04:43.308 user 0m7.557s 00:04:43.308 sys 0m0.614s 00:04:43.308 03:11:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.308 03:11:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.308 ************************************ 00:04:43.308 END TEST spdkcli_tcp 00:04:43.308 ************************************ 00:04:43.308 03:11:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.308 03:11:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.308 03:11:32 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.308 03:11:32 -- common/autotest_common.sh@10 -- # set +x 00:04:43.308 ************************************ 00:04:43.308 START TEST dpdk_mem_utility 00:04:43.308 ************************************ 00:04:43.308 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.308 * Looking for test storage... 00:04:43.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:43.308 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.308 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.308 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:43.565 
03:11:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.565 03:11:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.565 --rc genhtml_branch_coverage=1 00:04:43.565 --rc genhtml_function_coverage=1 00:04:43.565 --rc genhtml_legend=1 00:04:43.565 --rc geninfo_all_blocks=1 00:04:43.565 --rc geninfo_unexecuted_blocks=1 00:04:43.565 00:04:43.565 ' 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.565 --rc 
genhtml_branch_coverage=1 00:04:43.565 --rc genhtml_function_coverage=1 00:04:43.565 --rc genhtml_legend=1 00:04:43.565 --rc geninfo_all_blocks=1 00:04:43.565 --rc geninfo_unexecuted_blocks=1 00:04:43.565 00:04:43.565 ' 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.565 --rc genhtml_branch_coverage=1 00:04:43.565 --rc genhtml_function_coverage=1 00:04:43.565 --rc genhtml_legend=1 00:04:43.565 --rc geninfo_all_blocks=1 00:04:43.565 --rc geninfo_unexecuted_blocks=1 00:04:43.565 00:04:43.565 ' 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.565 --rc genhtml_branch_coverage=1 00:04:43.565 --rc genhtml_function_coverage=1 00:04:43.565 --rc genhtml_legend=1 00:04:43.565 --rc geninfo_all_blocks=1 00:04:43.565 --rc geninfo_unexecuted_blocks=1 00:04:43.565 00:04:43.565 ' 00:04:43.565 03:11:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:43.565 03:11:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57917 00:04:43.565 03:11:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.565 03:11:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57917 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57917 ']' 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.565 03:11:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.565 [2024-11-20 03:11:33.064815] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:43.565 [2024-11-20 03:11:33.064959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57917 ] 00:04:43.824 [2024-11-20 03:11:33.235049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.824 [2024-11-20 03:11:33.349331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.762 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.762 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:44.762 03:11:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:44.762 03:11:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:44.762 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.762 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.762 { 00:04:44.762 "filename": "/tmp/spdk_mem_dump.txt" 00:04:44.762 } 00:04:44.762 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.762 03:11:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:44.762 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:44.762 1 heaps 
totaling size 816.000000 MiB 00:04:44.762 size: 816.000000 MiB heap id: 0 00:04:44.762 end heaps---------- 00:04:44.762 9 mempools totaling size 595.772034 MiB 00:04:44.762 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:44.762 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:44.762 size: 92.545471 MiB name: bdev_io_57917 00:04:44.762 size: 50.003479 MiB name: msgpool_57917 00:04:44.762 size: 36.509338 MiB name: fsdev_io_57917 00:04:44.762 size: 21.763794 MiB name: PDU_Pool 00:04:44.762 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:44.762 size: 4.133484 MiB name: evtpool_57917 00:04:44.762 size: 0.026123 MiB name: Session_Pool 00:04:44.762 end mempools------- 00:04:44.762 6 memzones totaling size 4.142822 MiB 00:04:44.762 size: 1.000366 MiB name: RG_ring_0_57917 00:04:44.762 size: 1.000366 MiB name: RG_ring_1_57917 00:04:44.762 size: 1.000366 MiB name: RG_ring_4_57917 00:04:44.762 size: 1.000366 MiB name: RG_ring_5_57917 00:04:44.762 size: 0.125366 MiB name: RG_ring_2_57917 00:04:44.762 size: 0.015991 MiB name: RG_ring_3_57917 00:04:44.762 end memzones------- 00:04:44.762 03:11:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:44.762 heap id: 0 total size: 816.000000 MiB number of busy elements: 312 number of free elements: 18 00:04:44.762 list of free elements. 
size: 16.792114 MiB 00:04:44.762 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:44.762 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:44.762 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:44.762 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:44.762 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:44.762 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:44.762 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:44.762 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:44.762 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:44.762 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:44.762 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:44.762 element at address: 0x20001ac00000 with size: 0.562683 MiB 00:04:44.762 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:44.762 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:44.762 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:44.762 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:44.762 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:44.762 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:44.762 list of standard malloc elements. 
size: 199.286987 MiB 00:04:44.762 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:44.762 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:44.762 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:44.762 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:44.762 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:44.762 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:44.762 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:44.762 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:44.762 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:44.762 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:44.762 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:44.762 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:44.762 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:44.762 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:44.762 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:44.762 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:44.762 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:44.763 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:44.763 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:44.763 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac914c0 with size: 0.000244 
MiB 00:04:44.763 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac930c0 
with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:44.763 element at 
address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:44.763 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:44.763 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806bf80 with size: 0.000244 MiB 
00:04:44.764 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806db80 with 
size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:44.764 element at address: 
0x20002806f780 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:44.764 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:44.764 list of memzone associated elements. size: 599.920898 MiB 00:04:44.764 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:44.764 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:44.764 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:44.764 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:44.764 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:44.764 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57917_0 00:04:44.764 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:44.764 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57917_0 00:04:44.764 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:44.764 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57917_0 00:04:44.764 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:44.764 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:44.764 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:44.764 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:44.764 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:44.764 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57917_0 00:04:44.764 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:44.764 associated memzone info: size: 2.000366 
MiB name: RG_MP_msgpool_57917 00:04:44.764 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:44.764 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57917 00:04:44.764 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:44.764 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:44.764 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:44.764 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:44.764 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:44.764 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:44.764 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:44.764 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:44.764 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:44.764 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57917 00:04:44.764 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:44.764 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57917 00:04:44.764 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:44.764 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57917 00:04:44.764 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:44.764 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57917 00:04:44.764 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:44.764 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57917 00:04:44.764 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:44.764 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57917 00:04:44.764 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:44.764 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:44.764 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:44.764 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:04:44.764 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:44.764 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:44.764 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:44.764 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57917 00:04:44.764 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:44.764 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57917 00:04:44.764 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:44.764 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:44.764 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:44.764 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:44.764 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:44.764 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57917 00:04:44.764 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:44.764 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:44.764 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:44.764 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57917 00:04:44.764 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:44.764 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57917 00:04:44.764 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:44.764 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57917 00:04:44.764 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:44.764 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:44.764 03:11:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:44.764 03:11:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57917 00:04:44.764 03:11:34 dpdk_mem_utility -- 
common/autotest_common.sh@954 -- # '[' -z 57917 ']'
00:04:44.764 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57917
00:04:44.764 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:44.764 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:44.764 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57917
00:04:44.764 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:44.764 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:44.764 killing process with pid 57917 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57917'
00:04:44.764 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57917
00:04:44.764 03:11:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57917
00:04:47.297
00:04:47.297 real 0m3.947s
00:04:47.297 user 0m3.906s
00:04:47.297 sys 0m0.533s
00:04:47.297 03:11:36 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:47.297 ************************************
00:04:47.297 END TEST dpdk_mem_utility
00:04:47.297 ************************************
00:04:47.297 03:11:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:47.297 03:11:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:47.297 03:11:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:47.297 03:11:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:47.297 03:11:36 -- common/autotest_common.sh@10 -- # set +x
00:04:47.297 ************************************
00:04:47.297 START TEST event
00:04:47.297 ************************************
00:04:47.297 03:11:36 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:47.297 * Looking for test storage...
00:04:47.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:47.297 03:11:36 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:47.297 03:11:36 event -- common/autotest_common.sh@1693 -- # lcov --version
00:04:47.297 03:11:36 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:47.556 03:11:36 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:47.556 03:11:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:47.556 03:11:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:47.556 03:11:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:47.556 03:11:36 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:47.556 03:11:36 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:47.556 03:11:36 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:47.556 03:11:36 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:47.556 03:11:36 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:47.556 03:11:36 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:47.556 03:11:36 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:47.556 03:11:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:47.556 03:11:36 event -- scripts/common.sh@344 -- # case "$op" in
00:04:47.556 03:11:36 event -- scripts/common.sh@345 -- # : 1
00:04:47.556 03:11:36 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:47.556 03:11:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:47.556 03:11:36 event -- scripts/common.sh@365 -- # decimal 1
00:04:47.556 03:11:36 event -- scripts/common.sh@353 -- # local d=1
00:04:47.556 03:11:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:47.556 03:11:36 event -- scripts/common.sh@355 -- # echo 1
00:04:47.556 03:11:36 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:47.556 03:11:36 event -- scripts/common.sh@366 -- # decimal 2
00:04:47.556 03:11:36 event -- scripts/common.sh@353 -- # local d=2
00:04:47.556 03:11:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:47.556 03:11:36 event -- scripts/common.sh@355 -- # echo 2
00:04:47.556 03:11:36 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:47.556 03:11:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:47.556 03:11:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:47.556 03:11:36 event -- scripts/common.sh@368 -- # return 0
00:04:47.556 03:11:36 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:47.556 03:11:36 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:47.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.556 --rc genhtml_branch_coverage=1
00:04:47.556 --rc genhtml_function_coverage=1
00:04:47.556 --rc genhtml_legend=1
00:04:47.556 --rc geninfo_all_blocks=1
00:04:47.556 --rc geninfo_unexecuted_blocks=1
00:04:47.556
00:04:47.556 '
00:04:47.556 03:11:36 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:47.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.556 --rc genhtml_branch_coverage=1
00:04:47.556 --rc genhtml_function_coverage=1
00:04:47.556 --rc genhtml_legend=1
00:04:47.556 --rc geninfo_all_blocks=1
00:04:47.556 --rc geninfo_unexecuted_blocks=1
00:04:47.556
00:04:47.556 '
00:04:47.556 03:11:36 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:47.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.556 --rc genhtml_branch_coverage=1
00:04:47.556 --rc genhtml_function_coverage=1
00:04:47.556 --rc genhtml_legend=1
00:04:47.556 --rc geninfo_all_blocks=1
00:04:47.556 --rc geninfo_unexecuted_blocks=1
00:04:47.556
00:04:47.556 '
00:04:47.556 03:11:36 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:47.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.556 --rc genhtml_branch_coverage=1
00:04:47.556 --rc genhtml_function_coverage=1
00:04:47.556 --rc genhtml_legend=1
00:04:47.556 --rc geninfo_all_blocks=1
00:04:47.556 --rc geninfo_unexecuted_blocks=1
00:04:47.556
00:04:47.556 '
00:04:47.556 03:11:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:47.556 03:11:36 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:47.556 03:11:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:47.556 03:11:36 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:04:47.556 03:11:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:47.556 03:11:36 event -- common/autotest_common.sh@10 -- # set +x
00:04:47.556 ************************************
00:04:47.556 START TEST event_perf
00:04:47.556 ************************************
00:04:47.556 03:11:36 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:47.556 Running I/O for 1 seconds...[2024-11-20 03:11:37.036945] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:04:47.556 [2024-11-20 03:11:37.037049] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58021 ] 00:04:47.817 [2024-11-20 03:11:37.212877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.817 [2024-11-20 03:11:37.337693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.817 [2024-11-20 03:11:37.337837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.817 [2024-11-20 03:11:37.337895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.817 Running I/O for 1 seconds...[2024-11-20 03:11:37.337936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.198 00:04:49.198 lcore 0: 203348 00:04:49.198 lcore 1: 203348 00:04:49.198 lcore 2: 203348 00:04:49.198 lcore 3: 203350 00:04:49.198 done. 
00:04:49.198 00:04:49.198 real 0m1.590s 00:04:49.198 user 0m4.364s 00:04:49.198 sys 0m0.106s 00:04:49.198 03:11:38 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.198 03:11:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.198 ************************************ 00:04:49.198 END TEST event_perf 00:04:49.198 ************************************ 00:04:49.198 03:11:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.198 03:11:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:49.198 03:11:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.198 03:11:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.198 ************************************ 00:04:49.198 START TEST event_reactor 00:04:49.198 ************************************ 00:04:49.198 03:11:38 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.198 [2024-11-20 03:11:38.685965] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:49.198 [2024-11-20 03:11:38.686082] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58060 ] 00:04:49.458 [2024-11-20 03:11:38.861674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.458 [2024-11-20 03:11:38.976676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.853 test_start 00:04:50.853 oneshot 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 tick 250 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 tick 250 00:04:50.853 tick 500 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 tick 250 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 test_end 00:04:50.853 00:04:50.853 real 0m1.568s 00:04:50.853 user 0m1.353s 00:04:50.853 sys 0m0.107s 00:04:50.853 03:11:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.853 03:11:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:50.853 ************************************ 00:04:50.853 END TEST event_reactor 00:04:50.853 ************************************ 00:04:50.853 03:11:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.853 03:11:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:50.853 03:11:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.853 03:11:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.853 ************************************ 00:04:50.853 START TEST event_reactor_perf 00:04:50.853 ************************************ 00:04:50.853 03:11:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.853 [2024-11-20 
03:11:40.323812] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:50.853 [2024-11-20 03:11:40.323913] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58097 ] 00:04:51.112 [2024-11-20 03:11:40.489976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.112 [2024-11-20 03:11:40.599503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.493 test_start 00:04:52.493 test_end 00:04:52.493 Performance: 372736 events per second 00:04:52.493 00:04:52.493 real 0m1.547s 00:04:52.493 user 0m1.354s 00:04:52.493 sys 0m0.086s 00:04:52.493 ************************************ 00:04:52.493 END TEST event_reactor_perf 00:04:52.493 ************************************ 00:04:52.493 03:11:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.493 03:11:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.493 03:11:41 event -- event/event.sh@49 -- # uname -s 00:04:52.493 03:11:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.493 03:11:41 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.493 03:11:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.493 03:11:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.493 03:11:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.493 ************************************ 00:04:52.493 START TEST event_scheduler 00:04:52.493 ************************************ 00:04:52.493 03:11:41 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.493 * Looking for test storage... 
00:04:52.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.493 03:11:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.493 --rc genhtml_branch_coverage=1 00:04:52.493 --rc genhtml_function_coverage=1 00:04:52.493 --rc genhtml_legend=1 00:04:52.493 --rc geninfo_all_blocks=1 00:04:52.493 --rc geninfo_unexecuted_blocks=1 00:04:52.493 00:04:52.493 ' 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.493 --rc genhtml_branch_coverage=1 00:04:52.493 --rc genhtml_function_coverage=1 00:04:52.493 --rc 
genhtml_legend=1 00:04:52.493 --rc geninfo_all_blocks=1 00:04:52.493 --rc geninfo_unexecuted_blocks=1 00:04:52.493 00:04:52.493 ' 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.493 --rc genhtml_branch_coverage=1 00:04:52.493 --rc genhtml_function_coverage=1 00:04:52.493 --rc genhtml_legend=1 00:04:52.493 --rc geninfo_all_blocks=1 00:04:52.493 --rc geninfo_unexecuted_blocks=1 00:04:52.493 00:04:52.493 ' 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.493 --rc genhtml_branch_coverage=1 00:04:52.493 --rc genhtml_function_coverage=1 00:04:52.493 --rc genhtml_legend=1 00:04:52.493 --rc geninfo_all_blocks=1 00:04:52.493 --rc geninfo_unexecuted_blocks=1 00:04:52.493 00:04:52.493 ' 00:04:52.493 03:11:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.493 03:11:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58173 00:04:52.493 03:11:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.493 03:11:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.493 03:11:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58173 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58173 ']' 00:04:52.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.493 03:11:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.754 [2024-11-20 03:11:42.195122] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:52.754 [2024-11-20 03:11:42.195249] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58173 ] 00:04:52.754 [2024-11-20 03:11:42.367730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.013 [2024-11-20 03:11:42.488660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.013 [2024-11-20 03:11:42.488822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.013 [2024-11-20 03:11:42.488755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.013 [2024-11-20 03:11:42.488784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.583 03:11:43 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.583 03:11:43 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:53.583 03:11:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:53.583 03:11:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.583 03:11:43 
event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.583 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.583 POWER: Cannot set governor of lcore 0 to performance 00:04:53.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.583 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.583 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.583 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:53.583 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:53.583 POWER: Unable to set Power Management Environment for lcore 0 00:04:53.583 [2024-11-20 03:11:43.045457] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:53.583 [2024-11-20 03:11:43.045479] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:53.583 [2024-11-20 03:11:43.045490] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:53.583 [2024-11-20 03:11:43.045509] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:53.583 [2024-11-20 03:11:43.045518] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:53.583 [2024-11-20 03:11:43.045527] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:53.583 03:11:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.583 03:11:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:53.583 03:11:43 event.event_scheduler -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:53.583 03:11:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 [2024-11-20 03:11:43.352536] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:53.844 03:11:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.844 03:11:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:53.844 03:11:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.844 03:11:43 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 ************************************ 00:04:53.844 START TEST scheduler_create_thread 00:04:53.844 ************************************ 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 2 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 
00:04:53.844 3 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 4 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 5 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 6 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 
00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 7 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 8 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 9 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.844 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.104 10 00:04:54.104 03:11:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.104 03:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:54.104 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.104 03:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.484 03:11:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.484 03:11:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:55.484 03:11:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:55.484 03:11:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.484 03:11:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.052 03:11:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.052 03:11:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:56.052 03:11:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.052 03:11:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.990 03:11:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.990 03:11:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:56.990 03:11:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin 
scheduler_thread_delete 12 00:04:56.990 03:11:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.990 03:11:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.931 03:11:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.931 ************************************ 00:04:57.931 END TEST scheduler_create_thread 00:04:57.931 ************************************ 00:04:57.931 00:04:57.931 real 0m3.882s 00:04:57.931 user 0m0.026s 00:04:57.931 sys 0m0.008s 00:04:57.931 03:11:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.931 03:11:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.931 03:11:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.931 03:11:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58173 00:04:57.931 03:11:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58173 ']' 00:04:57.931 03:11:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58173 00:04:57.931 03:11:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:57.931 03:11:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.931 03:11:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58173 00:04:57.931 killing process with pid 58173 00:04:57.931 03:11:47 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:57.931 03:11:47 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:57.931 03:11:47 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58173' 00:04:57.931 03:11:47 event.event_scheduler -- 
common/autotest_common.sh@973 -- # kill 58173 00:04:57.931 03:11:47 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58173 00:04:58.191 [2024-11-20 03:11:47.625356] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:59.130 ************************************ 00:04:59.130 END TEST event_scheduler 00:04:59.130 ************************************ 00:04:59.130 00:04:59.130 real 0m6.863s 00:04:59.130 user 0m14.265s 00:04:59.130 sys 0m0.468s 00:04:59.130 03:11:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.130 03:11:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.390 03:11:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:59.390 03:11:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:59.390 03:11:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.390 03:11:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.390 03:11:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.390 ************************************ 00:04:59.390 START TEST app_repeat 00:04:59.390 ************************************ 00:04:59.390 03:11:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@19 -- # 
repeat_pid=58295 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:59.390 Process app_repeat pid: 58295 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58295' 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.390 spdk_app_start Round 0 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:59.390 03:11:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58295 /var/tmp/spdk-nbd.sock 00:04:59.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.390 03:11:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58295 ']' 00:04:59.390 03:11:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.390 03:11:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.391 03:11:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.391 03:11:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.391 03:11:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.391 [2024-11-20 03:11:48.888914] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:59.391 [2024-11-20 03:11:48.889043] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58295 ] 00:04:59.650 [2024-11-20 03:11:49.062859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.650 [2024-11-20 03:11:49.176342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.650 [2024-11-20 03:11:49.176376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.219 03:11:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.219 03:11:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:00.219 03:11:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.479 Malloc0 00:05:00.479 03:11:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.748 Malloc1 00:05:00.748 03:11:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.748 03:11:50 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.748 03:11:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.023 /dev/nbd0 00:05:01.023 03:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.023 03:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.023 1+0 records in 00:05:01.023 1+0 
records out 00:05:01.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336659 s, 12.2 MB/s 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.023 03:11:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.023 03:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.023 03:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.023 03:11:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.282 /dev/nbd1 00:05:01.282 03:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.282 03:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.282 03:11:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:01.282 03:11:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.282 03:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.283 1+0 records in 00:05:01.283 1+0 records out 00:05:01.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440926 s, 9.3 MB/s 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.283 03:11:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.283 03:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.283 03:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.283 03:11:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.283 03:11:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.283 03:11:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.543 { 00:05:01.543 "nbd_device": "/dev/nbd0", 00:05:01.543 "bdev_name": "Malloc0" 00:05:01.543 }, 00:05:01.543 { 00:05:01.543 "nbd_device": "/dev/nbd1", 00:05:01.543 "bdev_name": "Malloc1" 00:05:01.543 } 00:05:01.543 ]' 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.543 { 00:05:01.543 "nbd_device": "/dev/nbd0", 00:05:01.543 "bdev_name": "Malloc0" 00:05:01.543 }, 00:05:01.543 { 00:05:01.543 "nbd_device": "/dev/nbd1", 00:05:01.543 "bdev_name": "Malloc1" 00:05:01.543 } 00:05:01.543 ]' 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
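
The `waitfornbd` loops above poll `/proc/partitions` with `grep -q -w` until the nbd name appears, retrying up to 20 times, then confirm readability with a direct-I/O `dd`. A minimal sketch of that polling pattern, using a hypothetical `wait_for_entry` helper (not an SPDK function) and a plain temp file in place of `/proc/partitions` so it runs anywhere:

```shell
#!/bin/sh
# Sketch of the waitfornbd polling pattern seen in the log: retry up
# to 20 times until the expected name appears, word-matched, in a
# status file (the real helper greps /proc/partitions for the nbd name).
# wait_for_entry is a hypothetical stand-in, not an SPDK function.
wait_for_entry() {
    name=$1 file=$2
    i=1
    while [ "$i" -le 20 ]; do
        if grep -q -w "$name" "$file"; then
            return 0    # entry present: device is usable
        fi
        sleep 0.1       # brief pause before the next poll
        i=$((i + 1))
    done
    return 1            # timed out after 20 attempts
}

status=$(mktemp)
echo "nbd0" > "$status"
wait_for_entry nbd0 "$status" && echo "found"   # prints "found"
rm -f "$status"
```

The `-w` flag matters here: it word-matches `nbd0` so a later `nbd01` line would not satisfy the wait for `nbd0`.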
00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.543 /dev/nbd1' 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.543 /dev/nbd1' 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.543 03:11:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.543 256+0 records in 00:05:01.543 256+0 records out 00:05:01.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121829 s, 86.1 MB/s 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.543 256+0 records in 00:05:01.543 256+0 records out 00:05:01.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176537 s, 59.4 MB/s 00:05:01.543 03:11:51 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.543 256+0 records in 00:05:01.543 256+0 records out 00:05:01.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281808 s, 37.2 MB/s 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.543 03:11:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.544 03:11:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.544 03:11:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.544 03:11:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.544 03:11:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.803 03:11:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.063 03:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.323 03:11:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.323 03:11:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.582 03:11:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:03.965 [2024-11-20 03:11:53.315542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.965 [2024-11-20 03:11:53.426017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.965 [2024-11-20 03:11:53.426018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.226 
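
The write/verify cycle in the run above fills a reference file from `/dev/urandom`, `dd`s it onto each nbd device with `oflag=direct`, then byte-compares each device against the reference with `cmp -b -n 1M`. A sketch of that cycle under the assumption that plain temp files stand in for `/dev/nbd0` and `/dev/nbd1` (so it runs without an nbd device; `iflag`/`oflag=direct` are dropped for the same reason):

```shell
#!/bin/sh
# Sketch of the nbd_dd_data_verify write/verify cycle from the log,
# with temp files standing in for the nbd devices.
tmp=$(mktemp) dev0=$(mktemp) dev1=$(mktemp)

# write phase: 1 MiB of random data into the reference file, then
# copied onto each "device"
dd if=/dev/urandom of="$tmp" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: byte-compare the first 1M of each target against the
# reference, as cmp -b -n 1M does in the log (cmp exits non-zero on
# the first mismatch)
ok=1
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp" "$dev" || ok=0
done
[ "$ok" -eq 1 ] && echo "verify ok"   # prints "verify ok"

rm -f "$tmp" "$dev0" "$dev1"
```

Reading the data back through the block device rather than trusting the write's exit status is what actually exercises the nbd data path.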
[2024-11-20 03:11:53.610595] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.226 [2024-11-20 03:11:53.610664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.608 03:11:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.608 spdk_app_start Round 1 00:05:05.608 03:11:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:05.608 03:11:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58295 /var/tmp/spdk-nbd.sock 00:05:05.608 03:11:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58295 ']' 00:05:05.608 03:11:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.608 03:11:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.608 03:11:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:05.608 03:11:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.608 03:11:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.869 03:11:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.869 03:11:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.869 03:11:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.129 Malloc0 00:05:06.129 03:11:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.390 Malloc1 00:05:06.390 03:11:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.390 03:11:55 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.390 03:11:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.649 /dev/nbd0 00:05:06.649 03:11:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.649 03:11:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.649 1+0 records in 00:05:06.649 1+0 records out 00:05:06.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381941 s, 10.7 MB/s 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.649 
03:11:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.649 03:11:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.649 03:11:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.649 03:11:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.649 03:11:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.908 /dev/nbd1 00:05:06.908 03:11:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.908 03:11:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.908 03:11:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:06.908 03:11:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.908 03:11:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.908 03:11:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.908 03:11:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:06.908 03:11:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.908 03:11:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.908 03:11:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.908 03:11:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.908 1+0 records in 00:05:06.908 1+0 records out 00:05:06.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241064 s, 17.0 MB/s 00:05:06.909 03:11:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.909 03:11:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.909 03:11:56 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.909 03:11:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.909 03:11:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.909 03:11:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.909 03:11:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.909 03:11:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.909 03:11:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.909 03:11:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.168 { 00:05:07.168 "nbd_device": "/dev/nbd0", 00:05:07.168 "bdev_name": "Malloc0" 00:05:07.168 }, 00:05:07.168 { 00:05:07.168 "nbd_device": "/dev/nbd1", 00:05:07.168 "bdev_name": "Malloc1" 00:05:07.168 } 00:05:07.168 ]' 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.168 { 00:05:07.168 "nbd_device": "/dev/nbd0", 00:05:07.168 "bdev_name": "Malloc0" 00:05:07.168 }, 00:05:07.168 { 00:05:07.168 "nbd_device": "/dev/nbd1", 00:05:07.168 "bdev_name": "Malloc1" 00:05:07.168 } 00:05:07.168 ]' 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.168 /dev/nbd1' 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.168 /dev/nbd1' 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.168 
03:11:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.168 03:11:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.169 03:11:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.169 03:11:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.169 03:11:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.169 256+0 records in 00:05:07.169 256+0 records out 00:05:07.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135484 s, 77.4 MB/s 00:05:07.169 03:11:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.169 03:11:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.428 256+0 records in 00:05:07.428 256+0 records out 00:05:07.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238168 s, 44.0 MB/s 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.428 256+0 records in 00:05:07.428 256+0 records out 00:05:07.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267 s, 39.3 MB/s 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.428 03:11:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.687 03:11:57 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.687 03:11:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.946 03:11:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.946 03:11:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.946 03:11:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.946 03:11:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.946 03:11:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.946 03:11:57 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.946 03:11:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.946 03:11:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.205 03:11:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.205 03:11:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.205 03:11:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.205 03:11:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.205 03:11:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.205 03:11:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.205 03:11:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.206 03:11:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.206 03:11:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.206 03:11:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.465 03:11:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.843 [2024-11-20 03:11:59.148527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.843 [2024-11-20 03:11:59.259105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.843 [2024-11-20 03:11:59.259126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.843 [2024-11-20 03:11:59.461818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.843 [2024-11-20 03:11:59.461907] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
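
The `nbd_get_count` checks above rely on `jq -r '.[] | .nbd_device'` emitting one device path per line, so `grep -c /dev/nbd` (which counts matching lines, not occurrences) yields the attached-disk count; the bare `true` in the log absorbs grep's exit status 1 when the disk list is empty. A small sketch of that counting trick:

```shell
#!/bin/sh
# Sketch of the nbd_get_count counting trick from the log: grep -c
# counts matching LINES, so each device path must sit on its own line
# (as jq -r emits them). '|| true' mirrors the bare 'true' in the log,
# since grep exits 1 when nothing matches (the zero-disk case).
names='/dev/nbd0
/dev/nbd1'
count=$(printf '%s\n' "$names" | grep -c /dev/nbd || true)
echo "$count"          # prints 2

empty_count=$(printf '' | grep -c /dev/nbd || true)
echo "$empty_count"    # prints 0
```

Without the `|| true` guard, `set -e` style scripts would abort on the empty list even though `grep -c` still prints the correct `0`.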
00:05:11.752 03:12:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.752 spdk_app_start Round 2 00:05:11.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.752 03:12:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:11.752 03:12:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58295 /var/tmp/spdk-nbd.sock 00:05:11.752 03:12:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58295 ']' 00:05:11.752 03:12:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.752 03:12:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.752 03:12:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.752 03:12:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.752 03:12:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.752 03:12:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.752 03:12:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.752 03:12:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.011 Malloc0 00:05:12.011 03:12:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.271 Malloc1 00:05:12.271 03:12:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.271 
03:12:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.271 03:12:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.530 /dev/nbd0 00:05:12.530 03:12:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.530 03:12:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.530 03:12:02 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.530 1+0 records in 00:05:12.530 1+0 records out 00:05:12.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342986 s, 11.9 MB/s 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.530 03:12:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.530 03:12:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.530 03:12:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.530 03:12:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.789 /dev/nbd1 00:05:12.789 03:12:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.789 03:12:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.789 03:12:02 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.789 1+0 records in 00:05:12.789 1+0 records out 00:05:12.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404381 s, 10.1 MB/s 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.789 03:12:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.789 03:12:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.789 03:12:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.789 03:12:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.789 03:12:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.789 03:12:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.049 { 00:05:13.049 "nbd_device": "/dev/nbd0", 00:05:13.049 "bdev_name": "Malloc0" 00:05:13.049 }, 00:05:13.049 { 00:05:13.049 "nbd_device": "/dev/nbd1", 00:05:13.049 "bdev_name": 
"Malloc1" 00:05:13.049 } 00:05:13.049 ]' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.049 { 00:05:13.049 "nbd_device": "/dev/nbd0", 00:05:13.049 "bdev_name": "Malloc0" 00:05:13.049 }, 00:05:13.049 { 00:05:13.049 "nbd_device": "/dev/nbd1", 00:05:13.049 "bdev_name": "Malloc1" 00:05:13.049 } 00:05:13.049 ]' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.049 /dev/nbd1' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.049 /dev/nbd1' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.049 256+0 records in 00:05:13.049 256+0 records out 00:05:13.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146542 s, 71.6 MB/s 
00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.049 256+0 records in 00:05:13.049 256+0 records out 00:05:13.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186166 s, 56.3 MB/s 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.049 256+0 records in 00:05:13.049 256+0 records out 00:05:13.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295654 s, 35.5 MB/s 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.049 03:12:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.308 03:12:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.567 03:12:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.826 03:12:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.826 03:12:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.394 03:12:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.331 [2024-11-20 03:12:04.931656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.590 [2024-11-20 03:12:05.043170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.590 [2024-11-20 03:12:05.043172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.849 [2024-11-20 03:12:05.233246] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.849 [2024-11-20 03:12:05.233333] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.242 03:12:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58295 /var/tmp/spdk-nbd.sock 00:05:17.242 03:12:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58295 ']' 00:05:17.242 03:12:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.242 03:12:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.242 03:12:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:17.243 03:12:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.243 03:12:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.502 03:12:07 event.app_repeat -- event/event.sh@39 -- # killprocess 58295 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58295 ']' 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58295 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58295 00:05:17.502 killing process with pid 58295 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58295' 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58295 00:05:17.502 03:12:07 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58295 00:05:18.439 spdk_app_start is called in Round 0. 00:05:18.439 Shutdown signal received, stop current app iteration 00:05:18.439 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:05:18.439 spdk_app_start is called in Round 1. 00:05:18.439 Shutdown signal received, stop current app iteration 00:05:18.439 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:05:18.439 spdk_app_start is called in Round 2. 
00:05:18.439 Shutdown signal received, stop current app iteration 00:05:18.439 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:05:18.439 spdk_app_start is called in Round 3. 00:05:18.439 Shutdown signal received, stop current app iteration 00:05:18.698 03:12:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:18.698 03:12:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:18.698 00:05:18.698 real 0m19.261s 00:05:18.698 user 0m41.359s 00:05:18.698 sys 0m2.671s 00:05:18.698 03:12:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.698 03:12:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.698 ************************************ 00:05:18.698 END TEST app_repeat 00:05:18.698 ************************************ 00:05:18.698 03:12:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:18.698 03:12:08 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:18.698 03:12:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.698 03:12:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.698 03:12:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.698 ************************************ 00:05:18.698 START TEST cpu_locks 00:05:18.698 ************************************ 00:05:18.698 03:12:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:18.698 * Looking for test storage... 
00:05:18.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:18.698 03:12:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.698 03:12:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.698 03:12:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.958 03:12:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.958 03:12:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:18.958 03:12:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.958 03:12:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.958 --rc genhtml_branch_coverage=1 00:05:18.958 --rc genhtml_function_coverage=1 00:05:18.958 --rc genhtml_legend=1 00:05:18.958 --rc geninfo_all_blocks=1 00:05:18.958 --rc geninfo_unexecuted_blocks=1 00:05:18.958 00:05:18.958 ' 00:05:18.958 03:12:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.958 --rc genhtml_branch_coverage=1 00:05:18.958 --rc genhtml_function_coverage=1 00:05:18.958 --rc genhtml_legend=1 00:05:18.958 --rc geninfo_all_blocks=1 00:05:18.958 --rc geninfo_unexecuted_blocks=1 
00:05:18.958 00:05:18.958 ' 00:05:18.958 03:12:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.958 --rc genhtml_branch_coverage=1 00:05:18.958 --rc genhtml_function_coverage=1 00:05:18.958 --rc genhtml_legend=1 00:05:18.958 --rc geninfo_all_blocks=1 00:05:18.958 --rc geninfo_unexecuted_blocks=1 00:05:18.958 00:05:18.958 ' 00:05:18.958 03:12:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.958 --rc genhtml_branch_coverage=1 00:05:18.958 --rc genhtml_function_coverage=1 00:05:18.958 --rc genhtml_legend=1 00:05:18.958 --rc geninfo_all_blocks=1 00:05:18.958 --rc geninfo_unexecuted_blocks=1 00:05:18.958 00:05:18.958 ' 00:05:18.958 03:12:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:18.958 03:12:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:18.958 03:12:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:18.958 03:12:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:18.958 03:12:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.958 03:12:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.958 03:12:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.958 ************************************ 00:05:18.958 START TEST default_locks 00:05:18.958 ************************************ 00:05:18.958 03:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:18.958 03:12:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58737 00:05:18.958 03:12:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.958 
03:12:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58737 00:05:18.958 03:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58737 ']' 00:05:18.958 03:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.958 03:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.958 03:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.958 03:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.958 03:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.958 [2024-11-20 03:12:08.482119] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:18.958 [2024-11-20 03:12:08.482237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58737 ] 00:05:19.217 [2024-11-20 03:12:08.657458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.217 [2024-11-20 03:12:08.771812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.151 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.151 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:20.151 03:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58737 00:05:20.151 03:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58737 00:05:20.151 03:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58737 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58737 ']' 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58737 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58737 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.411 killing process with pid 58737 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58737' 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58737 00:05:20.411 03:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58737 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58737 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58737 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58737 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58737 ']' 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.946 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58737) - No such process 00:05:22.946 ERROR: process (pid: 58737) is no longer running 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.946 00:05:22.946 real 0m3.865s 00:05:22.946 user 0m3.821s 00:05:22.946 sys 0m0.545s 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.946 03:12:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.946 ************************************ 00:05:22.946 END TEST default_locks 00:05:22.946 ************************************ 00:05:22.946 03:12:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:22.946 03:12:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:22.946 03:12:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.946 03:12:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.946 ************************************ 00:05:22.946 START TEST default_locks_via_rpc 00:05:22.946 ************************************ 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58814 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58814 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58814 ']' 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.946 03:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.946 [2024-11-20 03:12:12.414785] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:22.946 [2024-11-20 03:12:12.414908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58814 ]
00:05:23.205 [2024-11-20 03:12:12.580045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.205 [2024-11-20 03:12:12.693731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.143 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58814
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58814
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58814
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58814 ']'
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58814
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:24.144 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58814
00:05:24.403 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:24.403 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58814
00:05:24.403 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58814'
00:05:24.403 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58814
00:05:24.403 03:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58814
00:05:26.958
00:05:26.958 real 0m3.824s
00:05:26.958 user 0m3.763s
00:05:26.958 sys 0m0.552s
00:05:26.958 03:12:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:26.958 03:12:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:26.958 ************************************
00:05:26.958 END TEST default_locks_via_rpc
00:05:26.958 ************************************
00:05:26.958 03:12:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:26.958 03:12:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:26.958 03:12:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:26.958 03:12:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:26.958 ************************************
00:05:26.958 START TEST non_locking_app_on_locked_coremask
00:05:26.958 ************************************
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58883
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58883 /var/tmp/spdk.sock
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58883 ']'
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:26.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:26.958 03:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:26.958 [2024-11-20 03:12:16.307132] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:05:26.958 [2024-11-20 03:12:16.307262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58883 ]
00:05:26.958 [2024-11-20 03:12:16.482045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.217 [2024-11-20 03:12:16.595819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58904
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58904 /var/tmp/spdk2.sock
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58904 ']'
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.154 03:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:28.154 [2024-11-20 03:12:17.577136] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:05:28.154 [2024-11-20 03:12:17.577246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58904 ]
00:05:28.154 [2024-11-20 03:12:17.746634] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:28.154 [2024-11-20 03:12:17.746693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.413 [2024-11-20 03:12:17.974566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58883
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58883
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58883
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58883 ']'
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58883
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:30.943 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:31.202 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58883
00:05:31.202 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:31.202 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:31.202 03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58883'
00:05:31.202 killing process with pid 58883
03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58883
03:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58883
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58904
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58904 ']'
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58904
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58904
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58904
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58904'
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58904
00:05:36.473 03:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58904
00:05:38.376
00:05:38.376 real 0m11.456s
00:05:38.376 user 0m11.743s
00:05:38.376 sys 0m1.160s
00:05:38.376 03:12:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:38.376 03:12:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.376 ************************************
00:05:38.376 END TEST non_locking_app_on_locked_coremask
00:05:38.376 ************************************
00:05:38.376 03:12:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:38.376 03:12:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:38.376 03:12:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:38.376 03:12:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:38.376 ************************************
00:05:38.376 START TEST locking_app_on_unlocked_coremask
00:05:38.376 ************************************
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59047
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59047 /var/tmp/spdk.sock
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59047 ']'
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:38.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:38.376 03:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.376 [2024-11-20 03:12:27.831240] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:05:38.376 [2024-11-20 03:12:27.831364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59047 ]
00:05:38.376 [2024-11-20 03:12:28.005187] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:38.376 [2024-11-20 03:12:28.005240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.635 [2024-11-20 03:12:28.123520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59067
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59067 /var/tmp/spdk2.sock
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59067 ']'
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:39.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:39.581 03:12:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:39.581 [2024-11-20 03:12:29.065433] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:05:39.581 [2024-11-20 03:12:29.065570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59067 ]
00:05:39.859 [2024-11-20 03:12:29.236506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.859 [2024-11-20 03:12:29.463047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.392 03:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:42.393 03:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:42.393 03:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59067
00:05:42.393 03:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59067
00:05:42.393 03:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59047
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59047 ']'
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59047
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59047
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59047
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59047'
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59047
00:05:42.651 03:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59047
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59067
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59067 ']'
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59067
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59067
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59067
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59067'
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59067
00:05:47.921 03:12:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59067
00:05:49.824
00:05:49.824 real 0m11.468s
00:05:49.824 user 0m11.741s
00:05:49.824 sys 0m1.199s
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:49.824 ************************************
00:05:49.824 END TEST locking_app_on_unlocked_coremask
00:05:49.824 ************************************
00:05:49.824 03:12:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:49.824 03:12:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.824 03:12:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.824 03:12:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:49.824 ************************************
00:05:49.824 START TEST locking_app_on_locked_coremask
************************************
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59214
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59214 /var/tmp/spdk.sock
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59214 ']'
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:49.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:49.824 03:12:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:49.824 [2024-11-20 03:12:39.364095] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:05:49.824 [2024-11-20 03:12:39.364211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ]
00:05:50.083 [2024-11-20 03:12:39.523856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.083 [2024-11-20 03:12:39.639328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59236
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59236 /var/tmp/spdk2.sock
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59236 /var/tmp/spdk2.sock
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59236 /var/tmp/spdk2.sock
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59236 ']'
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:51.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:51.020 03:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:51.020 [2024-11-20 03:12:40.583803] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:05:51.020 [2024-11-20 03:12:40.583923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59236 ]
00:05:51.279 [2024-11-20 03:12:40.752772] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59214 has claimed it.
00:05:51.279 [2024-11-20 03:12:40.752837] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:51.846 ERROR: process (pid: 59236) is no longer running
00:05:51.846 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59236) - No such process
00:05:51.846 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:51.846 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:51.846 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:51.846 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:51.846 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:51.846 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:51.846 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59214
00:05:51.846 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59214
00:05:51.846 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59214
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59214 ']'
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59214
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59214
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59214
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59214'
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59214
00:05:52.107 03:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59214
00:05:54.643
00:05:54.643 real 0m4.767s
00:05:54.643 user 0m4.942s
00:05:54.643 sys 0m0.768s
00:05:54.643 03:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:54.643 03:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:54.643 ************************************
00:05:54.643 END TEST locking_app_on_locked_coremask
00:05:54.643 ************************************
00:05:54.643 03:12:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:54.643 03:12:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:54.643 03:12:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:54.643 03:12:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:54.643 ************************************
00:05:54.643 START TEST locking_overlapped_coremask
00:05:54.643 ************************************
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59307
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59307 /var/tmp/spdk.sock
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59307 ']'
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:54.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:54.643 03:12:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:54.643 [2024-11-20 03:12:44.194823] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:05:54.643 [2024-11-20 03:12:44.194959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ]
00:05:54.902 [2024-11-20 03:12:44.370855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:54.902 [2024-11-20 03:12:44.490625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.902 [2024-11-20 03:12:44.490532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:54.902 [2024-11-20 03:12:44.490687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59325
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59325 /var/tmp/spdk2.sock
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59325 /var/tmp/spdk2.sock
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59325 /var/tmp/spdk2.sock
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59325 ']'
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:55.837 03:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:56.096 [2024-11-20 03:12:45.482955] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:05:56.096 [2024-11-20 03:12:45.483487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59325 ]
00:05:56.096 [2024-11-20 03:12:45.659510] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59307 has claimed it.
00:05:56.096 [2024-11-20 03:12:45.659585] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:56.663 ERROR: process (pid: 59325) is no longer running 00:05:56.663 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59325) - No such process 00:05:56.663 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.663 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:56.663 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:56.663 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.663 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.663 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.663 03:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:56.663 03:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.663 03:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59307 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59307 ']' 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59307 00:05:56.664 03:12:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59307 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.664 killing process with pid 59307 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59307' 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59307 00:05:56.664 03:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59307 00:05:59.197 00:05:59.197 real 0m4.524s 00:05:59.197 user 0m12.356s 00:05:59.197 sys 0m0.591s 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.197 ************************************ 00:05:59.197 END TEST locking_overlapped_coremask 00:05:59.197 ************************************ 00:05:59.197 03:12:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.197 03:12:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.197 03:12:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.197 03:12:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.197 ************************************ 00:05:59.197 START TEST 
locking_overlapped_coremask_via_rpc 00:05:59.197 ************************************ 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59389 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59389 /var/tmp/spdk.sock 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59389 ']' 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.197 03:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.197 [2024-11-20 03:12:48.789276] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:59.197 [2024-11-20 03:12:48.789399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59389 ] 00:05:59.456 [2024-11-20 03:12:48.955927] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:59.456 [2024-11-20 03:12:48.955982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.456 [2024-11-20 03:12:49.075234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.456 [2024-11-20 03:12:49.075373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.456 [2024-11-20 03:12:49.075410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59413 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59413 /var/tmp/spdk2.sock 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59413 ']' 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.397 03:12:49 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.397 03:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.657 [2024-11-20 03:12:50.085267] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:00.657 [2024-11-20 03:12:50.085429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59413 ] 00:06:00.657 [2024-11-20 03:12:50.268571] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.657 [2024-11-20 03:12:50.268639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.916 [2024-11-20 03:12:50.511845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.916 [2024-11-20 03:12:50.511979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.916 [2024-11-20 03:12:50.512011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.460 03:12:52 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.460 [2024-11-20 03:12:52.691807] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59389 has claimed it. 00:06:03.460 request: 00:06:03.460 { 00:06:03.460 "method": "framework_enable_cpumask_locks", 00:06:03.460 "req_id": 1 00:06:03.460 } 00:06:03.460 Got JSON-RPC error response 00:06:03.460 response: 00:06:03.460 { 00:06:03.460 "code": -32603, 00:06:03.460 "message": "Failed to claim CPU core: 2" 00:06:03.460 } 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59389 /var/tmp/spdk.sock 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59389 ']' 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59413 /var/tmp/spdk2.sock 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59413 ']' 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.460 03:12:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.720 03:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.720 03:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.720 03:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:03.720 03:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.720 03:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.720 03:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.720 00:06:03.720 real 0m4.486s 00:06:03.720 user 0m1.336s 00:06:03.720 sys 0m0.225s 00:06:03.720 03:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.720 03:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.720 ************************************ 00:06:03.720 END TEST locking_overlapped_coremask_via_rpc 00:06:03.720 ************************************ 00:06:03.720 03:12:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:03.720 03:12:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59389 ]] 00:06:03.720 03:12:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59389 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59389 ']' 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59389 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59389 00:06:03.720 killing process with pid 59389 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59389' 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59389 00:06:03.720 03:12:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59389 00:06:06.279 03:12:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59413 ]] 00:06:06.279 03:12:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59413 00:06:06.279 03:12:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59413 ']' 00:06:06.279 03:12:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59413 00:06:06.279 03:12:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.279 03:12:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.279 03:12:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59413 00:06:06.538 killing process with pid 59413 00:06:06.538 03:12:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:06.538 03:12:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:06.538 03:12:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59413' 00:06:06.538 03:12:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59413 00:06:06.538 03:12:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59413 00:06:09.070 03:12:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.070 Process with pid 59389 is not found 00:06:09.070 03:12:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:09.070 03:12:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59389 ]] 00:06:09.070 03:12:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59389 00:06:09.070 03:12:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59389 ']' 00:06:09.070 03:12:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59389 00:06:09.070 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59389) - No such process 00:06:09.070 03:12:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59389 is not found' 00:06:09.070 03:12:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59413 ]] 00:06:09.070 03:12:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59413 00:06:09.070 03:12:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59413 ']' 00:06:09.070 03:12:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59413 00:06:09.070 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59413) - No such process 00:06:09.070 Process with pid 59413 is not found 00:06:09.070 03:12:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59413 is not found' 00:06:09.070 03:12:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.070 00:06:09.070 real 0m50.293s 00:06:09.070 user 1m27.481s 00:06:09.070 sys 0m6.266s 00:06:09.070 ************************************ 00:06:09.070 END TEST cpu_locks 00:06:09.070 ************************************ 00:06:09.070 03:12:58 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:09.070 03:12:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.070 ************************************ 00:06:09.070 END TEST event 00:06:09.071 ************************************ 00:06:09.071 00:06:09.071 real 1m21.737s 00:06:09.071 user 2m30.407s 00:06:09.071 sys 0m10.100s 00:06:09.071 03:12:58 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.071 03:12:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.071 03:12:58 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:09.071 03:12:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.071 03:12:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.071 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.071 ************************************ 00:06:09.071 START TEST thread 00:06:09.071 ************************************ 00:06:09.071 03:12:58 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:09.071 * Looking for test storage... 
00:06:09.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:09.071 03:12:58 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.071 03:12:58 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.071 03:12:58 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.331 03:12:58 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.331 03:12:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.331 03:12:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.331 03:12:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.331 03:12:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.331 03:12:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.331 03:12:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.331 03:12:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.331 03:12:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.331 03:12:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.331 03:12:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.331 03:12:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.331 03:12:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:09.331 03:12:58 thread -- scripts/common.sh@345 -- # : 1 00:06:09.331 03:12:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.331 03:12:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.331 03:12:58 thread -- scripts/common.sh@365 -- # decimal 1 00:06:09.331 03:12:58 thread -- scripts/common.sh@353 -- # local d=1 00:06:09.331 03:12:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.331 03:12:58 thread -- scripts/common.sh@355 -- # echo 1 00:06:09.331 03:12:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.331 03:12:58 thread -- scripts/common.sh@366 -- # decimal 2 00:06:09.331 03:12:58 thread -- scripts/common.sh@353 -- # local d=2 00:06:09.331 03:12:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.331 03:12:58 thread -- scripts/common.sh@355 -- # echo 2 00:06:09.331 03:12:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.331 03:12:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.332 03:12:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.332 03:12:58 thread -- scripts/common.sh@368 -- # return 0 00:06:09.332 03:12:58 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.332 03:12:58 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.332 --rc genhtml_branch_coverage=1 00:06:09.332 --rc genhtml_function_coverage=1 00:06:09.332 --rc genhtml_legend=1 00:06:09.332 --rc geninfo_all_blocks=1 00:06:09.332 --rc geninfo_unexecuted_blocks=1 00:06:09.332 00:06:09.332 ' 00:06:09.332 03:12:58 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.332 --rc genhtml_branch_coverage=1 00:06:09.332 --rc genhtml_function_coverage=1 00:06:09.332 --rc genhtml_legend=1 00:06:09.332 --rc geninfo_all_blocks=1 00:06:09.332 --rc geninfo_unexecuted_blocks=1 00:06:09.332 00:06:09.332 ' 00:06:09.332 03:12:58 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.332 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.332 --rc genhtml_branch_coverage=1 00:06:09.332 --rc genhtml_function_coverage=1 00:06:09.332 --rc genhtml_legend=1 00:06:09.332 --rc geninfo_all_blocks=1 00:06:09.332 --rc geninfo_unexecuted_blocks=1 00:06:09.332 00:06:09.332 ' 00:06:09.332 03:12:58 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.332 --rc genhtml_branch_coverage=1 00:06:09.332 --rc genhtml_function_coverage=1 00:06:09.332 --rc genhtml_legend=1 00:06:09.332 --rc geninfo_all_blocks=1 00:06:09.332 --rc geninfo_unexecuted_blocks=1 00:06:09.332 00:06:09.332 ' 00:06:09.332 03:12:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.332 03:12:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:09.332 03:12:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.332 03:12:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.332 ************************************ 00:06:09.332 START TEST thread_poller_perf 00:06:09.332 ************************************ 00:06:09.332 03:12:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.332 [2024-11-20 03:12:58.838431] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:09.332 [2024-11-20 03:12:58.838624] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59613 ] 00:06:09.591 [2024-11-20 03:12:59.013394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.591 [2024-11-20 03:12:59.128169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.591 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:10.966 [2024-11-20T03:13:00.601Z] ====================================== 00:06:10.966 [2024-11-20T03:13:00.601Z] busy:2300158390 (cyc) 00:06:10.966 [2024-11-20T03:13:00.601Z] total_run_count: 379000 00:06:10.966 [2024-11-20T03:13:00.601Z] tsc_hz: 2290000000 (cyc) 00:06:10.966 [2024-11-20T03:13:00.601Z] ====================================== 00:06:10.966 [2024-11-20T03:13:00.601Z] poller_cost: 6069 (cyc), 2650 (nsec) 00:06:10.966 00:06:10.966 real 0m1.568s 00:06:10.966 user 0m1.369s 00:06:10.966 sys 0m0.091s 00:06:10.966 03:13:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.966 03:13:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.966 ************************************ 00:06:10.966 END TEST thread_poller_perf 00:06:10.966 ************************************ 00:06:10.966 03:13:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.966 03:13:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:10.966 03:13:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.966 03:13:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.966 ************************************ 00:06:10.966 START TEST thread_poller_perf 00:06:10.966 
************************************ 00:06:10.966 03:13:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.966 [2024-11-20 03:13:00.486452] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:10.967 [2024-11-20 03:13:00.486697] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59650 ] 00:06:11.225 [2024-11-20 03:13:00.663448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.225 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:11.225 [2024-11-20 03:13:00.779152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.604 [2024-11-20T03:13:02.239Z] ====================================== 00:06:12.604 [2024-11-20T03:13:02.239Z] busy:2293460224 (cyc) 00:06:12.604 [2024-11-20T03:13:02.239Z] total_run_count: 4836000 00:06:12.604 [2024-11-20T03:13:02.239Z] tsc_hz: 2290000000 (cyc) 00:06:12.604 [2024-11-20T03:13:02.239Z] ====================================== 00:06:12.604 [2024-11-20T03:13:02.239Z] poller_cost: 474 (cyc), 206 (nsec) 00:06:12.604 ************************************ 00:06:12.604 END TEST thread_poller_perf 00:06:12.604 00:06:12.604 real 0m1.578s 00:06:12.604 user 0m1.380s 00:06:12.604 sys 0m0.091s 00:06:12.604 03:13:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.604 03:13:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.604 ************************************ 00:06:12.604 03:13:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.604 ************************************ 00:06:12.604 END TEST thread 00:06:12.604 ************************************ 00:06:12.604 
00:06:12.604 real 0m3.500s 00:06:12.604 user 0m2.928s 00:06:12.604 sys 0m0.370s 00:06:12.604 03:13:02 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.604 03:13:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.604 03:13:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:12.604 03:13:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:12.604 03:13:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.604 03:13:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.604 03:13:02 -- common/autotest_common.sh@10 -- # set +x 00:06:12.604 ************************************ 00:06:12.604 START TEST app_cmdline 00:06:12.604 ************************************ 00:06:12.604 03:13:02 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:12.604 * Looking for test storage... 00:06:12.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.863 03:13:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.863 --rc genhtml_branch_coverage=1 00:06:12.863 --rc genhtml_function_coverage=1 00:06:12.863 --rc 
genhtml_legend=1 00:06:12.863 --rc geninfo_all_blocks=1 00:06:12.863 --rc geninfo_unexecuted_blocks=1 00:06:12.863 00:06:12.863 ' 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.863 --rc genhtml_branch_coverage=1 00:06:12.863 --rc genhtml_function_coverage=1 00:06:12.863 --rc genhtml_legend=1 00:06:12.863 --rc geninfo_all_blocks=1 00:06:12.863 --rc geninfo_unexecuted_blocks=1 00:06:12.863 00:06:12.863 ' 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.863 --rc genhtml_branch_coverage=1 00:06:12.863 --rc genhtml_function_coverage=1 00:06:12.863 --rc genhtml_legend=1 00:06:12.863 --rc geninfo_all_blocks=1 00:06:12.863 --rc geninfo_unexecuted_blocks=1 00:06:12.863 00:06:12.863 ' 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.863 --rc genhtml_branch_coverage=1 00:06:12.863 --rc genhtml_function_coverage=1 00:06:12.863 --rc genhtml_legend=1 00:06:12.863 --rc geninfo_all_blocks=1 00:06:12.863 --rc geninfo_unexecuted_blocks=1 00:06:12.863 00:06:12.863 ' 00:06:12.863 03:13:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.863 03:13:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59739 00:06:12.863 03:13:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.863 03:13:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59739 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59739 ']' 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.863 03:13:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.863 [2024-11-20 03:13:02.405263] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:12.863 [2024-11-20 03:13:02.405459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] 00:06:13.122 [2024-11-20 03:13:02.560637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.122 [2024-11-20 03:13:02.679674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.060 03:13:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.060 03:13:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:14.060 03:13:03 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:14.319 { 00:06:14.319 "version": "SPDK v25.01-pre git sha1 f22e807f1", 00:06:14.319 "fields": { 00:06:14.319 "major": 25, 00:06:14.319 "minor": 1, 00:06:14.319 "patch": 0, 00:06:14.319 "suffix": "-pre", 00:06:14.319 "commit": "f22e807f1" 00:06:14.319 } 00:06:14.319 } 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:14.319 03:13:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:14.319 03:13:03 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.578 request: 00:06:14.578 { 00:06:14.578 "method": "env_dpdk_get_mem_stats", 00:06:14.578 "req_id": 1 00:06:14.578 } 00:06:14.578 Got JSON-RPC error response 00:06:14.578 response: 00:06:14.578 { 00:06:14.578 "code": -32601, 00:06:14.578 "message": "Method not found" 00:06:14.578 } 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.579 03:13:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59739 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59739 ']' 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59739 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59739 00:06:14.579 killing process with pid 59739 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59739' 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 59739 00:06:14.579 03:13:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 59739 00:06:17.154 00:06:17.154 real 0m4.333s 00:06:17.154 user 0m4.607s 00:06:17.154 sys 0m0.554s 00:06:17.154 
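The `env_dpdk_get_mem_stats` failure above is the expected outcome of this test: `spdk_tgt` was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so any other method must be rejected, and `-32601` is the JSON-RPC 2.0 "Method not found" code. A minimal sketch of checking that error body (fields copied from the response in the log; this is an illustration, not SPDK's own validation code):

```python
import json

# Error object as printed by rpc.py above; the test treats this
# rejection as success because the method is outside --rpcs-allowed.
raw = '{"code": -32601, "message": "Method not found"}'
err = json.loads(raw)
assert err["code"] == -32601
assert err["message"] == "Method not found"
```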
************************************ 00:06:17.154 END TEST app_cmdline 00:06:17.154 ************************************ 00:06:17.154 03:13:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.154 03:13:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.154 03:13:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.154 03:13:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.154 03:13:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.154 03:13:06 -- common/autotest_common.sh@10 -- # set +x 00:06:17.154 ************************************ 00:06:17.154 START TEST version 00:06:17.154 ************************************ 00:06:17.154 03:13:06 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.154 * Looking for test storage... 00:06:17.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:17.154 03:13:06 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.154 03:13:06 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.154 03:13:06 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.154 03:13:06 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.154 03:13:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.154 03:13:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.154 03:13:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.154 03:13:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.154 03:13:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.154 03:13:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.154 03:13:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.154 03:13:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.154 03:13:06 version -- scripts/common.sh@340 -- # ver1_l=2 
00:06:17.154 03:13:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.154 03:13:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.154 03:13:06 version -- scripts/common.sh@344 -- # case "$op" in 00:06:17.154 03:13:06 version -- scripts/common.sh@345 -- # : 1 00:06:17.154 03:13:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.154 03:13:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.154 03:13:06 version -- scripts/common.sh@365 -- # decimal 1 00:06:17.154 03:13:06 version -- scripts/common.sh@353 -- # local d=1 00:06:17.154 03:13:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.154 03:13:06 version -- scripts/common.sh@355 -- # echo 1 00:06:17.154 03:13:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.154 03:13:06 version -- scripts/common.sh@366 -- # decimal 2 00:06:17.154 03:13:06 version -- scripts/common.sh@353 -- # local d=2 00:06:17.154 03:13:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.154 03:13:06 version -- scripts/common.sh@355 -- # echo 2 00:06:17.154 03:13:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.154 03:13:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.154 03:13:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.154 03:13:06 version -- scripts/common.sh@368 -- # return 0 00:06:17.154 03:13:06 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.154 03:13:06 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.154 --rc genhtml_branch_coverage=1 00:06:17.154 --rc genhtml_function_coverage=1 00:06:17.154 --rc genhtml_legend=1 00:06:17.154 --rc geninfo_all_blocks=1 00:06:17.154 --rc geninfo_unexecuted_blocks=1 00:06:17.154 00:06:17.154 ' 00:06:17.154 03:13:06 version -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.154 --rc genhtml_branch_coverage=1 00:06:17.154 --rc genhtml_function_coverage=1 00:06:17.154 --rc genhtml_legend=1 00:06:17.154 --rc geninfo_all_blocks=1 00:06:17.154 --rc geninfo_unexecuted_blocks=1 00:06:17.154 00:06:17.154 ' 00:06:17.154 03:13:06 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.154 --rc genhtml_branch_coverage=1 00:06:17.154 --rc genhtml_function_coverage=1 00:06:17.154 --rc genhtml_legend=1 00:06:17.154 --rc geninfo_all_blocks=1 00:06:17.154 --rc geninfo_unexecuted_blocks=1 00:06:17.154 00:06:17.154 ' 00:06:17.154 03:13:06 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.154 --rc genhtml_branch_coverage=1 00:06:17.154 --rc genhtml_function_coverage=1 00:06:17.154 --rc genhtml_legend=1 00:06:17.154 --rc geninfo_all_blocks=1 00:06:17.154 --rc geninfo_unexecuted_blocks=1 00:06:17.154 00:06:17.154 ' 00:06:17.154 03:13:06 version -- app/version.sh@17 -- # get_header_version major 00:06:17.154 03:13:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.154 03:13:06 version -- app/version.sh@14 -- # cut -f2 00:06:17.154 03:13:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.154 03:13:06 version -- app/version.sh@17 -- # major=25 00:06:17.154 03:13:06 version -- app/version.sh@18 -- # get_header_version minor 00:06:17.154 03:13:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.154 03:13:06 version -- app/version.sh@14 -- # cut -f2 00:06:17.154 03:13:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.154 03:13:06 version -- app/version.sh@18 -- 
# minor=1 00:06:17.154 03:13:06 version -- app/version.sh@19 -- # get_header_version patch 00:06:17.154 03:13:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.154 03:13:06 version -- app/version.sh@14 -- # cut -f2 00:06:17.154 03:13:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.154 03:13:06 version -- app/version.sh@19 -- # patch=0 00:06:17.154 03:13:06 version -- app/version.sh@20 -- # get_header_version suffix 00:06:17.154 03:13:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.154 03:13:06 version -- app/version.sh@14 -- # cut -f2 00:06:17.154 03:13:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.154 03:13:06 version -- app/version.sh@20 -- # suffix=-pre 00:06:17.154 03:13:06 version -- app/version.sh@22 -- # version=25.1 00:06:17.154 03:13:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:17.154 03:13:06 version -- app/version.sh@28 -- # version=25.1rc0 00:06:17.154 03:13:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:17.154 03:13:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:17.413 03:13:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:17.413 03:13:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:17.413 00:06:17.413 real 0m0.316s 00:06:17.413 user 0m0.182s 00:06:17.413 sys 0m0.187s 00:06:17.413 ************************************ 00:06:17.413 END TEST version 00:06:17.413 ************************************ 00:06:17.413 03:13:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.413 03:13:06 version -- 
common/autotest_common.sh@10 -- # set +x 00:06:17.413 03:13:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:17.413 03:13:06 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:17.413 03:13:06 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:17.413 03:13:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.413 03:13:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.413 03:13:06 -- common/autotest_common.sh@10 -- # set +x 00:06:17.413 ************************************ 00:06:17.413 START TEST bdev_raid 00:06:17.413 ************************************ 00:06:17.413 03:13:06 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:17.413 * Looking for test storage... 00:06:17.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:17.413 03:13:07 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.413 03:13:07 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.413 03:13:07 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.672 03:13:07 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.672 
03:13:07 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.672 03:13:07 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:17.672 03:13:07 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.672 03:13:07 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.672 --rc genhtml_branch_coverage=1 00:06:17.672 --rc genhtml_function_coverage=1 00:06:17.672 --rc genhtml_legend=1 00:06:17.672 --rc geninfo_all_blocks=1 00:06:17.672 --rc geninfo_unexecuted_blocks=1 00:06:17.672 00:06:17.672 ' 00:06:17.672 03:13:07 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:06:17.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.672 --rc genhtml_branch_coverage=1 00:06:17.672 --rc genhtml_function_coverage=1 00:06:17.672 --rc genhtml_legend=1 00:06:17.672 --rc geninfo_all_blocks=1 00:06:17.672 --rc geninfo_unexecuted_blocks=1 00:06:17.672 00:06:17.672 ' 00:06:17.672 03:13:07 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.672 --rc genhtml_branch_coverage=1 00:06:17.672 --rc genhtml_function_coverage=1 00:06:17.672 --rc genhtml_legend=1 00:06:17.672 --rc geninfo_all_blocks=1 00:06:17.672 --rc geninfo_unexecuted_blocks=1 00:06:17.672 00:06:17.672 ' 00:06:17.672 03:13:07 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.672 --rc genhtml_branch_coverage=1 00:06:17.672 --rc genhtml_function_coverage=1 00:06:17.672 --rc genhtml_legend=1 00:06:17.672 --rc geninfo_all_blocks=1 00:06:17.672 --rc geninfo_unexecuted_blocks=1 00:06:17.672 00:06:17.672 ' 00:06:17.672 03:13:07 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:17.672 03:13:07 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.672 03:13:07 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:17.672 03:13:07 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:17.672 03:13:07 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:17.672 03:13:07 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:17.672 03:13:07 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:17.672 03:13:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.672 03:13:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.672 03:13:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:06:17.672 ************************************ 00:06:17.672 START TEST raid1_resize_data_offset_test 00:06:17.672 ************************************ 00:06:17.672 Process raid pid: 59921 00:06:17.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59921 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59921' 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59921 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59921 ']' 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.672 03:13:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.673 03:13:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.673 [2024-11-20 03:13:07.223397] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:17.673 [2024-11-20 03:13:07.223604] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.931 [2024-11-20 03:13:07.384600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.931 [2024-11-20 03:13:07.500985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.189 [2024-11-20 03:13:07.714825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.189 [2024-11-20 03:13:07.714960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 malloc0 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 malloc1 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.757 03:13:08 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 null0 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.757 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 [2024-11-20 03:13:08.289267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:18.757 [2024-11-20 03:13:08.291580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:18.757 [2024-11-20 03:13:08.291712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:18.757 [2024-11-20 03:13:08.291956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:18.757 [2024-11-20 03:13:08.292018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:18.758 [2024-11-20 03:13:08.292398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:18.758 [2024-11-20 03:13:08.292643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:18.758 [2024-11-20 03:13:08.292693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:18.758 [2024-11-20 03:13:08.292923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:18.758 [2024-11-20 03:13:08.349150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:18.758 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:19.325 malloc2
00:06:19.325 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:19.325 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:19.325 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.325 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:19.325 [2024-11-20 03:13:08.903984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:19.325 [2024-11-20 03:13:08.920642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:19.325 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:19.326 [2024-11-20 03:13:08.922488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:19.326 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:19.326 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.326 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:19.326 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:19.326 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:19.585 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:19.585 03:13:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59921
00:06:19.585 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59921 ']'
00:06:19.585 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59921
00:06:19.585 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:06:19.585 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:19.585 03:13:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59921
00:06:19.585 killing process with pid 59921
00:06:19.585 03:13:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:19.585 03:13:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:19.585 03:13:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59921'
00:06:19.585 03:13:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59921
00:06:19.585 03:13:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59921
00:06:19.585 [2024-11-20 03:13:09.014914] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:19.585 [2024-11-20 03:13:09.016626] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:19.585 [2024-11-20 03:13:09.016686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:19.585 [2024-11-20 03:13:09.016702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:19.585 [2024-11-20 03:13:09.053453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:19.585 [2024-11-20 03:13:09.053811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:19.585 [2024-11-20 03:13:09.053831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:21.488 [2024-11-20 03:13:10.855976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:22.426 03:13:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:22.426
00:06:22.426 real 0m4.826s
00:06:22.426 user 0m4.778s
00:06:22.426 sys 0m0.524s
00:06:22.426 03:13:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:22.426 03:13:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:22.426 ************************************
00:06:22.426 END TEST raid1_resize_data_offset_test
00:06:22.426 ************************************
00:06:22.426 03:13:12 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:22.426 03:13:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:22.426 03:13:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:22.426 03:13:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:22.426 ************************************
00:06:22.426 START TEST raid0_resize_superblock_test
00:06:22.426 ************************************
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60010
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60010'
00:06:22.426 Process raid pid: 60010
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60010
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60010 ']'
00:06:22.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:22.426 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:22.685 [2024-11-20 03:13:12.117568] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
[2024-11-20 03:13:12.117825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:22.685 [2024-11-20 03:13:12.275162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.943 [2024-11-20 03:13:12.390000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.201 [2024-11-20 03:13:12.595862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:23.201 [2024-11-20 03:13:12.595961] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:23.460 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:23.460 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:23.460 03:13:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:23.460 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.460 03:13:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.027 malloc0
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.027 [2024-11-20 03:13:13.509431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 03:13:13.509501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 03:13:13.509526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-20 03:13:13.509537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 03:13:13.511755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 03:13:13.511846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.027 2c5b9b7d-3f98-413a-a2ed-ba3cd1d09a2c
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.027 af68241f-3a18-47eb-bac2-0a28a09d8960
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.027 829ce106-c613-4205-8bc2-1a3146ea68dd
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.027 [2024-11-20 03:13:13.642552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev af68241f-3a18-47eb-bac2-0a28a09d8960 is claimed
[2024-11-20 03:13:13.642701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 829ce106-c613-4205-8bc2-1a3146ea68dd is claimed
[2024-11-20 03:13:13.642908] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-20 03:13:13.642932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-20 03:13:13.643238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-20 03:13:13.643469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-20 03:13:13.643483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-20 03:13:13.643690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.027 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:24.286 [2024-11-20 03:13:13.754563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.286 [2024-11-20 03:13:13.806466] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-20 03:13:13.806549] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'af68241f-3a18-47eb-bac2-0a28a09d8960' was resized: old size 131072, new size 204800
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.286 [2024-11-20 03:13:13.818319] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-20 03:13:13.818345] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '829ce106-c613-4205-8bc2-1a3146ea68dd' was resized: old size 131072, new size 204800
[2024-11-20 03:13:13.818374] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.286 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.546 [2024-11-20 03:13:13.934237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.546 [2024-11-20 03:13:13.981952] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-11-20 03:13:13.982036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-11-20 03:13:13.982048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-20 03:13:13.982065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-11-20 03:13:13.982179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 03:13:13.982214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 03:13:13.982226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.546 [2024-11-20 03:13:13.989847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 03:13:13.989914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 03:13:13.989936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-20 03:13:13.989947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 03:13:13.992322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 03:13:13.992368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
[2024-11-20 03:13:13.994071] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev af68241f-3a18-47eb-bac2-0a28a09d8960
[2024-11-20 03:13:13.994132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev af68241f-3a18-47eb-bac2-0a28a09d8960 is claimed
[2024-11-20 03:13:13.994263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 829ce106-c613-4205-8bc2-1a3146ea68dd
[2024-11-20 03:13:13.994283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 829ce106-c613-4205-8bc2-1a3146ea68dd is claimed
[2024-11-20 03:13:13.994445] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 829ce106-c613-4205-8bc2-1a3146ea68dd (2) smaller than existing raid bdev Raid (3)
[2024-11-20 03:13:13.994470] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev af68241f-3a18-47eb-bac2-0a28a09d8960: File exists
[2024-11-20 03:13:13.994505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-20 03:13:13.994533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
pt0
[2024-11-20 03:13:13.994824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-20 03:13:13.995001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-20 03:13:13.995011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-20 03:13:13.995186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.546 03:13:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:24.547 [2024-11-20 03:13:14.010339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60010
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60010 ']'
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60010
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60010
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60010'
00:06:24.547 killing process with pid 60010
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60010
00:06:24.547 [2024-11-20 03:13:14.102795] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:24.547 [2024-11-20 03:13:14.102948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:24.547 03:13:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60010
00:06:24.547 [2024-11-20 03:13:14.103042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 03:13:14.103054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:25.923 [2024-11-20 03:13:15.555530] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:27.319 03:13:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:27.319
00:06:27.319 real 0m4.635s
00:06:27.319 user 0m4.907s
00:06:27.319 sys 0m0.532s
00:06:27.319 03:13:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:27.319 03:13:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.319 ************************************
00:06:27.319 END TEST raid0_resize_superblock_test
00:06:27.319 ************************************
00:06:27.319 03:13:16 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:27.319 03:13:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:27.319 03:13:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:27.319 03:13:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:27.319 ************************************
00:06:27.319 START TEST raid1_resize_superblock_test
00:06:27.319 ************************************
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60110
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:27.319 Process raid pid: 60110
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60110'
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60110
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60110 ']'
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:27.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:27.319 03:13:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.578 [2024-11-20 03:13:16.816195] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
[2024-11-20 03:13:16.816314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:27.578 [2024-11-20 03:13:16.994637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.578 [2024-11-20 03:13:17.110983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.837 [2024-11-20 03:13:17.304253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:27.837 [2024-11-20 03:13:17.304292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:28.096 03:13:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:28.096 03:13:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:28.096 03:13:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:28.096 03:13:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.096 03:13:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:28.664 malloc0
00:06:28.664 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.664 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:28.664 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.664 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:28.664 [2024-11-20 03:13:18.212397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 03:13:18.212467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 03:13:18.212493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-20 03:13:18.212504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 03:13:18.214590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 03:13:18.214641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:06:28.664 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.664 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:28.664 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.664 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:28.923 8a9f647a-35da-4291-9fca-a288cd917d3e
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:28.923 cfc8492a-cf2f-4592-bdb6-bb400276f6b0
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:28.923 ae481cac-0c9e-47d0-b7e6-c97759e04b91
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:28.923 [2024-11-20 03:13:18.344766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cfc8492a-cf2f-4592-bdb6-bb400276f6b0 is claimed
[2024-11-20 03:13:18.344903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ae481cac-0c9e-47d0-b7e6-c97759e04b91 is claimed
[2024-11-20 03:13:18.345031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-20 03:13:18.345048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
[2024-11-20 03:13:18.345301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-20 03:13:18.345483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-20 03:13:18.345494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-20 03:13:18.345678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.923 [2024-11-20 03:13:18.456869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.923 [2024-11-20 03:13:18.504796] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:28.923 [2024-11-20 03:13:18.504877] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cfc8492a-cf2f-4592-bdb6-bb400276f6b0' was resized: old size 131072, new size 204800 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:28.923 03:13:18 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.923 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.923 [2024-11-20 03:13:18.516634] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:28.924 [2024-11-20 03:13:18.516694] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ae481cac-0c9e-47d0-b7e6-c97759e04b91' was resized: old size 131072, new size 204800 00:06:28.924 [2024-11-20 03:13:18.516769] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:28.924 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.924 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:28.924 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.924 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.924 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:28.924 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.183 [2024-11-20 03:13:18.628572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.183 [2024-11-20 03:13:18.676293] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:29.183 [2024-11-20 03:13:18.676374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:29.183 [2024-11-20 03:13:18.676403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:29.183 [2024-11-20 03:13:18.676562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:29.183 [2024-11-20 03:13:18.676789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:29.183 [2024-11-20 03:13:18.676862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:29.183 [2024-11-20 03:13:18.676882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.183 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.183 [2024-11-20 03:13:18.688145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:29.183 [2024-11-20 03:13:18.688244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:29.183 [2024-11-20 03:13:18.688282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:29.183 [2024-11-20 03:13:18.688319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:29.183 [2024-11-20 03:13:18.690493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:29.183 [2024-11-20 03:13:18.690565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:29.183 [2024-11-20 03:13:18.692303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
cfc8492a-cf2f-4592-bdb6-bb400276f6b0 00:06:29.184 [2024-11-20 03:13:18.692441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cfc8492a-cf2f-4592-bdb6-bb400276f6b0 is claimed 00:06:29.184 [2024-11-20 03:13:18.692629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ae481cac-0c9e-47d0-b7e6-c97759e04b91 00:06:29.184 [2024-11-20 03:13:18.692696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ae481cac-0c9e-47d0-b7e6-c97759e04b91 is claimed 00:06:29.184 [2024-11-20 03:13:18.692904] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ae481cac-0c9e-47d0-b7e6-c97759e04b91 (2) smaller than existing raid bdev Raid (3) 00:06:29.184 [2024-11-20 03:13:18.692969] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev cfc8492a-cf2f-4592-bdb6-bb400276f6b0: File exists 00:06:29.184 [2024-11-20 03:13:18.693040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:29.184 [2024-11-20 03:13:18.693074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:29.184 [2024-11-20 03:13:18.693334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:29.184 pt0 00:06:29.184 [2024-11-20 03:13:18.693532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:29.184 [2024-11-20 03:13:18.693542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:29.184 [2024-11-20 03:13:18.693719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.184 [2024-11-20 03:13:18.716565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60110 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60110 ']' 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60110 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60110 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60110' 00:06:29.184 killing process with pid 60110 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60110 00:06:29.184 [2024-11-20 03:13:18.787422] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:29.184 [2024-11-20 03:13:18.787556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:29.184 03:13:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60110 00:06:29.184 [2024-11-20 03:13:18.787651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:29.184 [2024-11-20 03:13:18.787663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:31.103 [2024-11-20 03:13:20.216100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:32.039 03:13:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:32.040 00:06:32.040 real 0m4.584s 00:06:32.040 user 0m4.825s 00:06:32.040 sys 0m0.551s 00:06:32.040 03:13:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.040 ************************************ 00:06:32.040 END TEST raid1_resize_superblock_test 00:06:32.040 ************************************ 00:06:32.040 03:13:21 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:32.040 03:13:21 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:32.040 03:13:21 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:32.040 03:13:21 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:32.040 03:13:21 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:32.040 03:13:21 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:32.040 03:13:21 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:32.040 03:13:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:32.040 03:13:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.040 03:13:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:32.040 ************************************ 00:06:32.040 START TEST raid_function_test_raid0 00:06:32.040 ************************************ 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:32.040 Process raid pid: 60212 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60212 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60212' 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60212 00:06:32.040 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60212 ']' 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.040 03:13:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:32.040 [2024-11-20 03:13:21.487158] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:32.040 [2024-11-20 03:13:21.487374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.040 [2024-11-20 03:13:21.659576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.299 [2024-11-20 03:13:21.774287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.558 [2024-11-20 03:13:21.977656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.558 [2024-11-20 03:13:21.977698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:32.817 03:13:22 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:32.817 Base_1 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:32.817 Base_2 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:32.817 [2024-11-20 03:13:22.439982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:32.817 [2024-11-20 03:13:22.441749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:32.817 [2024-11-20 03:13:22.441817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:32.817 [2024-11-20 03:13:22.441830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:32.817 [2024-11-20 03:13:22.442090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:32.817 [2024-11-20 03:13:22.442227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:32.817 [2024-11-20 03:13:22.442236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:32.817 [2024-11-20 03:13:22.442386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.817 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:06:33.075 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:33.075 [2024-11-20 03:13:22.683654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:33.075 /dev/nbd0 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.334 1+0 records in 00:06:33.334 1+0 records out 00:06:33.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356843 s, 11.5 MB/s 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # size=4096 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.334 { 00:06:33.334 "nbd_device": "/dev/nbd0", 00:06:33.334 "bdev_name": "raid" 00:06:33.334 } 00:06:33.334 ]' 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.334 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.334 { 00:06:33.334 "nbd_device": "/dev/nbd0", 00:06:33.334 "bdev_name": "raid" 00:06:33.334 } 00:06:33.334 ]' 00:06:33.592 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:33.592 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:33.592 03:13:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:33.592 
03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:33.592 4096+0 records in 00:06:33.592 4096+0 records out 00:06:33.592 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0356463 s, 58.8 MB/s 00:06:33.592 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:33.850 4096+0 records in 00:06:33.850 4096+0 records out 00:06:33.850 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.223369 s, 9.4 MB/s 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:33.850 128+0 records in 00:06:33.850 128+0 records out 00:06:33.850 65536 bytes (66 kB, 64 KiB) copied, 0.00134562 s, 48.7 MB/s 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:33.850 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:33.851 2035+0 records in 00:06:33.851 2035+0 records out 00:06:33.851 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0143512 s, 72.6 MB/s 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:33.851 456+0 records in 00:06:33.851 456+0 records out 00:06:33.851 233472 bytes (233 kB, 228 KiB) copied, 0.00401966 s, 58.1 MB/s 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:33.851 03:13:23 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.851 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.110 [2024-11-20 03:13:23.619900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:34.110 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60212 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60212 ']' 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 60212 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60212 00:06:34.369 killing process with pid 60212 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60212' 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60212 00:06:34.369 [2024-11-20 03:13:23.951812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.369 [2024-11-20 03:13:23.951931] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.369 03:13:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60212 00:06:34.369 [2024-11-20 03:13:23.951978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.369 [2024-11-20 03:13:23.951993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:34.627 [2024-11-20 03:13:24.162810] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.007 03:13:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:36.007 00:06:36.007 real 0m3.855s 00:06:36.007 user 0m4.470s 00:06:36.007 sys 0m0.952s 00:06:36.007 03:13:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.007 03:13:25 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:06:36.007 ************************************ 00:06:36.007 END TEST raid_function_test_raid0 00:06:36.007 ************************************ 00:06:36.007 03:13:25 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:36.007 03:13:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.007 03:13:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.007 03:13:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.007 ************************************ 00:06:36.007 START TEST raid_function_test_concat 00:06:36.007 ************************************ 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60339 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60339' 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.007 Process raid pid: 60339 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60339 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60339 ']' 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.007 03:13:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.007 [2024-11-20 03:13:25.408633] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:36.007 [2024-11-20 03:13:25.408751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.007 [2024-11-20 03:13:25.583214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.270 [2024-11-20 03:13:25.697112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.270 [2024-11-20 03:13:25.901852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.270 [2024-11-20 03:13:25.901900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.838 Base_1 
00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.838 Base_2 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.838 [2024-11-20 03:13:26.329069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:36.838 [2024-11-20 03:13:26.330944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:36.838 [2024-11-20 03:13:26.331027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:36.838 [2024-11-20 03:13:26.331040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:36.838 [2024-11-20 03:13:26.331328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:36.838 [2024-11-20 03:13:26.331526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:36.838 [2024-11-20 03:13:26.331543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:36.838 [2024-11-20 03:13:26.331730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:36.838 03:13:26 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:36.838 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:06:37.097 [2024-11-20 03:13:26.560753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:37.097 /dev/nbd0 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.097 1+0 records in 00:06:37.097 1+0 records out 00:06:37.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042849 s, 9.6 MB/s 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.097 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.357 { 00:06:37.357 "nbd_device": "/dev/nbd0", 00:06:37.357 "bdev_name": "raid" 00:06:37.357 } 00:06:37.357 ]' 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.357 { 00:06:37.357 "nbd_device": "/dev/nbd0", 00:06:37.357 "bdev_name": "raid" 00:06:37.357 } 00:06:37.357 ]' 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:37.357 03:13:26 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:37.357 4096+0 records in 00:06:37.357 4096+0 records out 00:06:37.357 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0346089 s, 60.6 MB/s 00:06:37.357 03:13:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:37.616 4096+0 records in 00:06:37.616 4096+0 records out 00:06:37.616 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.18967 s, 11.1 MB/s 00:06:37.616 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:37.617 128+0 records in 00:06:37.617 128+0 records out 00:06:37.617 65536 bytes (66 kB, 64 KiB) copied, 0.00116853 s, 56.1 MB/s 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:37.617 2035+0 records in 00:06:37.617 2035+0 records out 00:06:37.617 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0136913 s, 76.1 MB/s 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:37.617 456+0 records in 00:06:37.617 456+0 records out 00:06:37.617 233472 bytes (233 kB, 228 KiB) copied, 0.00273397 s, 85.4 MB/s 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.617 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.876 [2024-11-20 03:13:27.469191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.876 03:13:27 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.876 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60339 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60339 ']' 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60339 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.135 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60339 00:06:38.394 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.394 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.394 killing process with pid 60339 00:06:38.394 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60339' 00:06:38.394 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60339 00:06:38.394 [2024-11-20 03:13:27.768774] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:38.394 [2024-11-20 03:13:27.768885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.394 03:13:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60339 00:06:38.394 [2024-11-20 03:13:27.768947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.394 [2024-11-20 03:13:27.768961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:38.394 [2024-11-20 03:13:27.976041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.772 03:13:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:39.772 00:06:39.772 real 0m3.755s 00:06:39.772 user 0m4.313s 00:06:39.772 sys 0m0.970s 00:06:39.772 03:13:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.772 03:13:29 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.772 ************************************ 00:06:39.772 END TEST raid_function_test_concat 00:06:39.772 ************************************ 00:06:39.772 03:13:29 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:39.772 03:13:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.772 03:13:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.772 03:13:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.772 ************************************ 00:06:39.772 START TEST raid0_resize_test 00:06:39.772 ************************************ 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60466 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60466' 00:06:39.772 Process raid pid: 60466 
00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60466 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60466 ']' 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.772 03:13:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.772 [2024-11-20 03:13:29.233150] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:39.772 [2024-11-20 03:13:29.233275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.772 [2024-11-20 03:13:29.392120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.030 [2024-11-20 03:13:29.508120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.289 [2024-11-20 03:13:29.715340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.289 [2024-11-20 03:13:29.715381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 Base_1 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 Base_2 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 [2024-11-20 03:13:30.085605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:40.548 [2024-11-20 03:13:30.087458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:40.548 [2024-11-20 03:13:30.087520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:40.548 [2024-11-20 03:13:30.087532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:40.548 [2024-11-20 03:13:30.087799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:40.548 [2024-11-20 03:13:30.087942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:40.548 [2024-11-20 03:13:30.087958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:40.548 [2024-11-20 03:13:30.088115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 [2024-11-20 03:13:30.093579] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:40.548 [2024-11-20 03:13:30.093622] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:40.548 true 
00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:40.548 [2024-11-20 03:13:30.105772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.548 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 [2024-11-20 03:13:30.153486] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:40.548 [2024-11-20 03:13:30.153519] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:40.548 [2024-11-20 03:13:30.153566] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:40.549 true 
00:06:40.549 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.549 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:40.549 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:40.549 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.549 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.549 [2024-11-20 03:13:30.169653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60466 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60466 ']' 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60466 00:06:40.807 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:40.808 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.808 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60466 00:06:40.808 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.808 killing process with pid 60466 
00:06:40.808 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.808 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60466' 00:06:40.808 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60466 00:06:40.808 [2024-11-20 03:13:30.234033] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.808 03:13:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60466 00:06:40.808 [2024-11-20 03:13:30.234134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.808 [2024-11-20 03:13:30.234188] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.808 [2024-11-20 03:13:30.234198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:40.808 [2024-11-20 03:13:30.251443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.744 03:13:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:41.744 00:06:41.744 real 0m2.197s 00:06:41.744 user 0m2.340s 00:06:41.744 sys 0m0.314s 00:06:41.744 03:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.744 03:13:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.744 ************************************ 00:06:41.744 END TEST raid0_resize_test 00:06:41.744 ************************************ 00:06:42.003 03:13:31 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:42.003 03:13:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.003 03:13:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.003 03:13:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:42.003 ************************************ 
00:06:42.003 START TEST raid1_resize_test 00:06:42.003 ************************************ 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60522 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60522' 00:06:42.003 Process raid pid: 60522 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60522 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60522 ']' 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.003 03:13:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.003 [2024-11-20 03:13:31.491970] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:42.003 [2024-11-20 03:13:31.492093] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.261 [2024-11-20 03:13:31.668478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.261 [2024-11-20 03:13:31.783086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.520 [2024-11-20 03:13:31.984091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.520 [2024-11-20 03:13:31.984127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.780 Base_1 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:42.780 
03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.780 Base_2 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.780 [2024-11-20 03:13:32.360235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:42.780 [2024-11-20 03:13:32.361964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:42.780 [2024-11-20 03:13:32.362029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:42.780 [2024-11-20 03:13:32.362040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:42.780 [2024-11-20 03:13:32.362293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:42.780 [2024-11-20 03:13:32.362478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:42.780 [2024-11-20 03:13:32.362497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:42.780 [2024-11-20 03:13:32.362672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:42.780 03:13:32 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.780 [2024-11-20 03:13:32.372198] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.780 [2024-11-20 03:13:32.372230] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:42.780 true 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:42.780 [2024-11-20 03:13:32.384340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.780 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:43.040 [2024-11-20 03:13:32.432096] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.040 [2024-11-20 03:13:32.432123] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:43.040 [2024-11-20 03:13:32.432153] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:43.040 true 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:43.040 [2024-11-20 03:13:32.444250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60522 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60522 ']' 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60522 00:06:43.040 
03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60522 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.040 killing process with pid 60522 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60522' 00:06:43.040 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60522 00:06:43.040 [2024-11-20 03:13:32.530884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.040 [2024-11-20 03:13:32.530993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.041 03:13:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60522 00:06:43.041 [2024-11-20 03:13:32.531496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.041 [2024-11-20 03:13:32.531527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:43.041 [2024-11-20 03:13:32.548944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:44.418 03:13:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:44.418 00:06:44.418 real 0m2.244s 00:06:44.418 user 0m2.392s 00:06:44.418 sys 0m0.329s 00:06:44.418 03:13:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.418 03:13:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.418 ************************************ 00:06:44.418 END TEST raid1_resize_test 
00:06:44.418 ************************************ 00:06:44.418 03:13:33 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:44.418 03:13:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:44.419 03:13:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:44.419 03:13:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:44.419 03:13:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.419 03:13:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:44.419 ************************************ 00:06:44.419 START TEST raid_state_function_test 00:06:44.419 ************************************ 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:44.419 03:13:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60579 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:44.419 Process raid pid: 60579 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60579' 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60579 00:06:44.419 03:13:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60579 ']' 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.419 03:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.419 [2024-11-20 03:13:33.807459] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:44.419 [2024-11-20 03:13:33.807576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.419 [2024-11-20 03:13:33.982490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.678 [2024-11-20 03:13:34.093245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.678 [2024-11-20 03:13:34.292025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.678 [2024-11-20 03:13:34.292073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.245 [2024-11-20 03:13:34.646250] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:45.245 [2024-11-20 03:13:34.646306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:45.245 [2024-11-20 03:13:34.646316] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:45.245 [2024-11-20 03:13:34.646326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.245 
03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.245 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.245 "name": "Existed_Raid", 00:06:45.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.245 "strip_size_kb": 64, 00:06:45.245 "state": "configuring", 00:06:45.245 "raid_level": "raid0", 00:06:45.245 "superblock": false, 00:06:45.245 "num_base_bdevs": 2, 00:06:45.245 "num_base_bdevs_discovered": 0, 00:06:45.245 "num_base_bdevs_operational": 2, 00:06:45.246 "base_bdevs_list": [ 00:06:45.246 { 00:06:45.246 "name": "BaseBdev1", 00:06:45.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.246 "is_configured": false, 00:06:45.246 "data_offset": 0, 00:06:45.246 "data_size": 0 00:06:45.246 }, 00:06:45.246 { 00:06:45.246 "name": "BaseBdev2", 00:06:45.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.246 "is_configured": false, 00:06:45.246 "data_offset": 0, 00:06:45.246 "data_size": 0 00:06:45.246 } 00:06:45.246 ] 00:06:45.246 }' 00:06:45.246 03:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.246 03:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.504 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:45.505 03:13:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.505 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.505 [2024-11-20 03:13:35.077473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:45.505 [2024-11-20 03:13:35.077512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:45.505 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.505 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:45.505 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.505 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.505 [2024-11-20 03:13:35.089432] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:45.505 [2024-11-20 03:13:35.089478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:45.505 [2024-11-20 03:13:35.089487] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:45.505 [2024-11-20 03:13:35.089498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:45.505 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.505 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:45.505 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.505 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.505 [2024-11-20 03:13:35.136770] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:45.764 BaseBdev1 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.764 [ 00:06:45.764 { 00:06:45.764 "name": "BaseBdev1", 00:06:45.764 "aliases": [ 00:06:45.764 "86cd85f9-836c-497d-bf55-823610aee171" 00:06:45.764 ], 00:06:45.764 "product_name": "Malloc disk", 00:06:45.764 "block_size": 512, 00:06:45.764 "num_blocks": 65536, 00:06:45.764 "uuid": 
"86cd85f9-836c-497d-bf55-823610aee171", 00:06:45.764 "assigned_rate_limits": { 00:06:45.764 "rw_ios_per_sec": 0, 00:06:45.764 "rw_mbytes_per_sec": 0, 00:06:45.764 "r_mbytes_per_sec": 0, 00:06:45.764 "w_mbytes_per_sec": 0 00:06:45.764 }, 00:06:45.764 "claimed": true, 00:06:45.764 "claim_type": "exclusive_write", 00:06:45.764 "zoned": false, 00:06:45.764 "supported_io_types": { 00:06:45.764 "read": true, 00:06:45.764 "write": true, 00:06:45.764 "unmap": true, 00:06:45.764 "flush": true, 00:06:45.764 "reset": true, 00:06:45.764 "nvme_admin": false, 00:06:45.764 "nvme_io": false, 00:06:45.764 "nvme_io_md": false, 00:06:45.764 "write_zeroes": true, 00:06:45.764 "zcopy": true, 00:06:45.764 "get_zone_info": false, 00:06:45.764 "zone_management": false, 00:06:45.764 "zone_append": false, 00:06:45.764 "compare": false, 00:06:45.764 "compare_and_write": false, 00:06:45.764 "abort": true, 00:06:45.764 "seek_hole": false, 00:06:45.764 "seek_data": false, 00:06:45.764 "copy": true, 00:06:45.764 "nvme_iov_md": false 00:06:45.764 }, 00:06:45.764 "memory_domains": [ 00:06:45.764 { 00:06:45.764 "dma_device_id": "system", 00:06:45.764 "dma_device_type": 1 00:06:45.764 }, 00:06:45.764 { 00:06:45.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.764 "dma_device_type": 2 00:06:45.764 } 00:06:45.764 ], 00:06:45.764 "driver_specific": {} 00:06:45.764 } 00:06:45.764 ] 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:45.764 03:13:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.764 "name": "Existed_Raid", 00:06:45.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.764 "strip_size_kb": 64, 00:06:45.764 "state": "configuring", 00:06:45.764 "raid_level": "raid0", 00:06:45.764 "superblock": false, 00:06:45.764 "num_base_bdevs": 2, 00:06:45.764 "num_base_bdevs_discovered": 1, 00:06:45.764 "num_base_bdevs_operational": 2, 00:06:45.764 "base_bdevs_list": [ 00:06:45.764 { 00:06:45.764 "name": "BaseBdev1", 00:06:45.764 "uuid": "86cd85f9-836c-497d-bf55-823610aee171", 00:06:45.764 "is_configured": true, 00:06:45.764 "data_offset": 0, 
00:06:45.764 "data_size": 65536 00:06:45.764 }, 00:06:45.764 { 00:06:45.764 "name": "BaseBdev2", 00:06:45.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.764 "is_configured": false, 00:06:45.764 "data_offset": 0, 00:06:45.764 "data_size": 0 00:06:45.764 } 00:06:45.764 ] 00:06:45.764 }' 00:06:45.764 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.765 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.358 [2024-11-20 03:13:35.675918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:46.358 [2024-11-20 03:13:35.675982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.358 [2024-11-20 03:13:35.687933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:46.358 [2024-11-20 03:13:35.689797] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:46.358 [2024-11-20 03:13:35.689837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.358 "name": "Existed_Raid", 00:06:46.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.358 "strip_size_kb": 64, 00:06:46.358 "state": "configuring", 00:06:46.358 "raid_level": "raid0", 00:06:46.358 "superblock": false, 00:06:46.358 "num_base_bdevs": 2, 00:06:46.358 "num_base_bdevs_discovered": 1, 00:06:46.358 "num_base_bdevs_operational": 2, 00:06:46.358 "base_bdevs_list": [ 00:06:46.358 { 00:06:46.358 "name": "BaseBdev1", 00:06:46.358 "uuid": "86cd85f9-836c-497d-bf55-823610aee171", 00:06:46.358 "is_configured": true, 00:06:46.358 "data_offset": 0, 00:06:46.358 "data_size": 65536 00:06:46.358 }, 00:06:46.358 { 00:06:46.358 "name": "BaseBdev2", 00:06:46.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.358 "is_configured": false, 00:06:46.358 "data_offset": 0, 00:06:46.358 "data_size": 0 00:06:46.358 } 00:06:46.358 ] 00:06:46.358 }' 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.358 03:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.617 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:46.617 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.617 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.617 [2024-11-20 03:13:36.172605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:46.617 [2024-11-20 03:13:36.172677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:46.617 [2024-11-20 03:13:36.172687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.617 [2024-11-20 03:13:36.172956] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:46.617 [2024-11-20 03:13:36.173144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:46.618 [2024-11-20 03:13:36.173172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:46.618 [2024-11-20 03:13:36.173489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.618 BaseBdev2 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.618 03:13:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.618 [ 00:06:46.618 { 00:06:46.618 "name": "BaseBdev2", 00:06:46.618 "aliases": [ 00:06:46.618 "a980ae5f-5034-4690-88bf-537a72b66086" 00:06:46.618 ], 00:06:46.618 "product_name": "Malloc disk", 00:06:46.618 "block_size": 512, 00:06:46.618 "num_blocks": 65536, 00:06:46.618 "uuid": "a980ae5f-5034-4690-88bf-537a72b66086", 00:06:46.618 "assigned_rate_limits": { 00:06:46.618 "rw_ios_per_sec": 0, 00:06:46.618 "rw_mbytes_per_sec": 0, 00:06:46.618 "r_mbytes_per_sec": 0, 00:06:46.618 "w_mbytes_per_sec": 0 00:06:46.618 }, 00:06:46.618 "claimed": true, 00:06:46.618 "claim_type": "exclusive_write", 00:06:46.618 "zoned": false, 00:06:46.618 "supported_io_types": { 00:06:46.618 "read": true, 00:06:46.618 "write": true, 00:06:46.618 "unmap": true, 00:06:46.618 "flush": true, 00:06:46.618 "reset": true, 00:06:46.618 "nvme_admin": false, 00:06:46.618 "nvme_io": false, 00:06:46.618 "nvme_io_md": false, 00:06:46.618 "write_zeroes": true, 00:06:46.618 "zcopy": true, 00:06:46.618 "get_zone_info": false, 00:06:46.618 "zone_management": false, 00:06:46.618 "zone_append": false, 00:06:46.618 "compare": false, 00:06:46.618 "compare_and_write": false, 00:06:46.618 "abort": true, 00:06:46.618 "seek_hole": false, 00:06:46.618 "seek_data": false, 00:06:46.618 "copy": true, 00:06:46.618 "nvme_iov_md": false 00:06:46.618 }, 00:06:46.618 "memory_domains": [ 00:06:46.618 { 00:06:46.618 "dma_device_id": "system", 00:06:46.618 "dma_device_type": 1 00:06:46.618 }, 00:06:46.618 { 00:06:46.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.618 "dma_device_type": 2 00:06:46.618 } 00:06:46.618 ], 00:06:46.618 "driver_specific": {} 00:06:46.618 } 00:06:46.618 ] 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:46.618 03:13:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.618 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.877 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:06:46.878 "name": "Existed_Raid", 00:06:46.878 "uuid": "9293a379-47aa-4b35-89ff-c0de5a8f4c1e", 00:06:46.878 "strip_size_kb": 64, 00:06:46.878 "state": "online", 00:06:46.878 "raid_level": "raid0", 00:06:46.878 "superblock": false, 00:06:46.878 "num_base_bdevs": 2, 00:06:46.878 "num_base_bdevs_discovered": 2, 00:06:46.878 "num_base_bdevs_operational": 2, 00:06:46.878 "base_bdevs_list": [ 00:06:46.878 { 00:06:46.878 "name": "BaseBdev1", 00:06:46.878 "uuid": "86cd85f9-836c-497d-bf55-823610aee171", 00:06:46.878 "is_configured": true, 00:06:46.878 "data_offset": 0, 00:06:46.878 "data_size": 65536 00:06:46.878 }, 00:06:46.878 { 00:06:46.878 "name": "BaseBdev2", 00:06:46.878 "uuid": "a980ae5f-5034-4690-88bf-537a72b66086", 00:06:46.878 "is_configured": true, 00:06:46.878 "data_offset": 0, 00:06:46.878 "data_size": 65536 00:06:46.878 } 00:06:46.878 ] 00:06:46.878 }' 00:06:46.878 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.878 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.137 [2024-11-20 03:13:36.700061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:47.137 "name": "Existed_Raid", 00:06:47.137 "aliases": [ 00:06:47.137 "9293a379-47aa-4b35-89ff-c0de5a8f4c1e" 00:06:47.137 ], 00:06:47.137 "product_name": "Raid Volume", 00:06:47.137 "block_size": 512, 00:06:47.137 "num_blocks": 131072, 00:06:47.137 "uuid": "9293a379-47aa-4b35-89ff-c0de5a8f4c1e", 00:06:47.137 "assigned_rate_limits": { 00:06:47.137 "rw_ios_per_sec": 0, 00:06:47.137 "rw_mbytes_per_sec": 0, 00:06:47.137 "r_mbytes_per_sec": 0, 00:06:47.137 "w_mbytes_per_sec": 0 00:06:47.137 }, 00:06:47.137 "claimed": false, 00:06:47.137 "zoned": false, 00:06:47.137 "supported_io_types": { 00:06:47.137 "read": true, 00:06:47.137 "write": true, 00:06:47.137 "unmap": true, 00:06:47.137 "flush": true, 00:06:47.137 "reset": true, 00:06:47.137 "nvme_admin": false, 00:06:47.137 "nvme_io": false, 00:06:47.137 "nvme_io_md": false, 00:06:47.137 "write_zeroes": true, 00:06:47.137 "zcopy": false, 00:06:47.137 "get_zone_info": false, 00:06:47.137 "zone_management": false, 00:06:47.137 "zone_append": false, 00:06:47.137 "compare": false, 00:06:47.137 "compare_and_write": false, 00:06:47.137 "abort": false, 00:06:47.137 "seek_hole": false, 00:06:47.137 "seek_data": false, 00:06:47.137 "copy": false, 00:06:47.137 "nvme_iov_md": false 00:06:47.137 }, 00:06:47.137 "memory_domains": [ 00:06:47.137 { 00:06:47.137 "dma_device_id": "system", 00:06:47.137 "dma_device_type": 1 00:06:47.137 }, 00:06:47.137 { 00:06:47.137 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:47.137 "dma_device_type": 2 00:06:47.137 }, 00:06:47.137 { 00:06:47.137 "dma_device_id": "system", 00:06:47.137 "dma_device_type": 1 00:06:47.137 }, 00:06:47.137 { 00:06:47.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.137 "dma_device_type": 2 00:06:47.137 } 00:06:47.137 ], 00:06:47.137 "driver_specific": { 00:06:47.137 "raid": { 00:06:47.137 "uuid": "9293a379-47aa-4b35-89ff-c0de5a8f4c1e", 00:06:47.137 "strip_size_kb": 64, 00:06:47.137 "state": "online", 00:06:47.137 "raid_level": "raid0", 00:06:47.137 "superblock": false, 00:06:47.137 "num_base_bdevs": 2, 00:06:47.137 "num_base_bdevs_discovered": 2, 00:06:47.137 "num_base_bdevs_operational": 2, 00:06:47.137 "base_bdevs_list": [ 00:06:47.137 { 00:06:47.137 "name": "BaseBdev1", 00:06:47.137 "uuid": "86cd85f9-836c-497d-bf55-823610aee171", 00:06:47.137 "is_configured": true, 00:06:47.137 "data_offset": 0, 00:06:47.137 "data_size": 65536 00:06:47.137 }, 00:06:47.137 { 00:06:47.137 "name": "BaseBdev2", 00:06:47.137 "uuid": "a980ae5f-5034-4690-88bf-537a72b66086", 00:06:47.137 "is_configured": true, 00:06:47.137 "data_offset": 0, 00:06:47.137 "data_size": 65536 00:06:47.137 } 00:06:47.137 ] 00:06:47.137 } 00:06:47.137 } 00:06:47.137 }' 00:06:47.137 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:47.396 BaseBdev2' 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.396 03:13:36 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:06:47.396 [2024-11-20 03:13:36.931428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:47.396 [2024-11-20 03:13:36.931465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:47.396 [2024-11-20 03:13:36.931518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.396 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.396 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:47.396 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:47.396 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:47.396 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:47.396 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:47.396 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.655 03:13:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.655 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.655 "name": "Existed_Raid", 00:06:47.655 "uuid": "9293a379-47aa-4b35-89ff-c0de5a8f4c1e", 00:06:47.655 "strip_size_kb": 64, 00:06:47.655 "state": "offline", 00:06:47.655 "raid_level": "raid0", 00:06:47.655 "superblock": false, 00:06:47.655 "num_base_bdevs": 2, 00:06:47.655 "num_base_bdevs_discovered": 1, 00:06:47.655 "num_base_bdevs_operational": 1, 00:06:47.655 "base_bdevs_list": [ 00:06:47.655 { 00:06:47.656 "name": null, 00:06:47.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.656 "is_configured": false, 00:06:47.656 "data_offset": 0, 00:06:47.656 "data_size": 65536 00:06:47.656 }, 00:06:47.656 { 00:06:47.656 "name": "BaseBdev2", 00:06:47.656 "uuid": "a980ae5f-5034-4690-88bf-537a72b66086", 00:06:47.656 "is_configured": true, 00:06:47.656 "data_offset": 0, 00:06:47.656 "data_size": 65536 00:06:47.656 } 00:06:47.656 ] 00:06:47.656 }' 00:06:47.656 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.656 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.915 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.915 [2024-11-20 03:13:37.512562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:47.915 [2024-11-20 03:13:37.512629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.173 03:13:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60579 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60579 ']' 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60579 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60579 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.173 killing process with pid 60579 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60579' 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60579 00:06:48.173 [2024-11-20 03:13:37.706529] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:06:48.173 03:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60579 00:06:48.173 [2024-11-20 03:13:37.722871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.550 03:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:49.550 00:06:49.550 real 0m5.108s 00:06:49.550 user 0m7.397s 00:06:49.550 sys 0m0.831s 00:06:49.550 03:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.550 03:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.550 ************************************ 00:06:49.550 END TEST raid_state_function_test 00:06:49.550 ************************************ 00:06:49.550 03:13:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:49.550 03:13:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:49.550 03:13:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.550 03:13:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.550 ************************************ 00:06:49.550 START TEST raid_state_function_test_sb 00:06:49.550 ************************************ 00:06:49.550 03:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:06:49.550 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60832 00:06:49.551 Process raid pid: 60832 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60832' 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60832 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60832 ']' 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.551 03:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.551 [2024-11-20 03:13:38.983911] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:49.551 [2024-11-20 03:13:38.984031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.551 [2024-11-20 03:13:39.159443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.819 [2024-11-20 03:13:39.276832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.086 [2024-11-20 03:13:39.473402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.086 [2024-11-20 03:13:39.473447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.346 [2024-11-20 03:13:39.798817] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:50.346 [2024-11-20 03:13:39.798870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:50.346 [2024-11-20 03:13:39.798881] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.346 [2024-11-20 03:13:39.798890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.346 
03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.346 "name": "Existed_Raid", 00:06:50.346 "uuid": "5a481581-63e1-4c08-964f-24af1a13fe4f", 00:06:50.346 "strip_size_kb": 
64, 00:06:50.346 "state": "configuring", 00:06:50.346 "raid_level": "raid0", 00:06:50.346 "superblock": true, 00:06:50.346 "num_base_bdevs": 2, 00:06:50.346 "num_base_bdevs_discovered": 0, 00:06:50.346 "num_base_bdevs_operational": 2, 00:06:50.346 "base_bdevs_list": [ 00:06:50.346 { 00:06:50.346 "name": "BaseBdev1", 00:06:50.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.346 "is_configured": false, 00:06:50.346 "data_offset": 0, 00:06:50.346 "data_size": 0 00:06:50.346 }, 00:06:50.346 { 00:06:50.346 "name": "BaseBdev2", 00:06:50.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.346 "is_configured": false, 00:06:50.346 "data_offset": 0, 00:06:50.346 "data_size": 0 00:06:50.346 } 00:06:50.346 ] 00:06:50.346 }' 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.346 03:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.913 [2024-11-20 03:13:40.254799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:50.913 [2024-11-20 03:13:40.254841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.913 03:13:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.913 [2024-11-20 03:13:40.266818] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:50.913 [2024-11-20 03:13:40.266875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:50.913 [2024-11-20 03:13:40.266900] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.913 [2024-11-20 03:13:40.266912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.913 [2024-11-20 03:13:40.315282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:50.913 BaseBdev1 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.913 [ 00:06:50.913 { 00:06:50.913 "name": "BaseBdev1", 00:06:50.913 "aliases": [ 00:06:50.913 "5c6c4ed6-9849-42f7-9d40-88f5a6d2b0b7" 00:06:50.913 ], 00:06:50.913 "product_name": "Malloc disk", 00:06:50.913 "block_size": 512, 00:06:50.913 "num_blocks": 65536, 00:06:50.913 "uuid": "5c6c4ed6-9849-42f7-9d40-88f5a6d2b0b7", 00:06:50.913 "assigned_rate_limits": { 00:06:50.913 "rw_ios_per_sec": 0, 00:06:50.913 "rw_mbytes_per_sec": 0, 00:06:50.913 "r_mbytes_per_sec": 0, 00:06:50.913 "w_mbytes_per_sec": 0 00:06:50.913 }, 00:06:50.913 "claimed": true, 00:06:50.913 "claim_type": "exclusive_write", 00:06:50.913 "zoned": false, 00:06:50.913 "supported_io_types": { 00:06:50.913 "read": true, 00:06:50.913 "write": true, 00:06:50.913 "unmap": true, 00:06:50.913 "flush": true, 00:06:50.913 "reset": true, 00:06:50.913 "nvme_admin": false, 00:06:50.913 "nvme_io": false, 00:06:50.913 "nvme_io_md": false, 00:06:50.913 "write_zeroes": true, 00:06:50.913 "zcopy": true, 00:06:50.913 "get_zone_info": false, 00:06:50.913 "zone_management": false, 00:06:50.913 "zone_append": false, 00:06:50.913 "compare": false, 00:06:50.913 "compare_and_write": false, 00:06:50.913 
"abort": true, 00:06:50.913 "seek_hole": false, 00:06:50.913 "seek_data": false, 00:06:50.913 "copy": true, 00:06:50.913 "nvme_iov_md": false 00:06:50.913 }, 00:06:50.913 "memory_domains": [ 00:06:50.913 { 00:06:50.913 "dma_device_id": "system", 00:06:50.913 "dma_device_type": 1 00:06:50.913 }, 00:06:50.913 { 00:06:50.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.913 "dma_device_type": 2 00:06:50.913 } 00:06:50.913 ], 00:06:50.913 "driver_specific": {} 00:06:50.913 } 00:06:50.913 ] 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.913 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.914 "name": "Existed_Raid", 00:06:50.914 "uuid": "8a66adae-0e40-4f12-9c51-253a84c0f736", 00:06:50.914 "strip_size_kb": 64, 00:06:50.914 "state": "configuring", 00:06:50.914 "raid_level": "raid0", 00:06:50.914 "superblock": true, 00:06:50.914 "num_base_bdevs": 2, 00:06:50.914 "num_base_bdevs_discovered": 1, 00:06:50.914 "num_base_bdevs_operational": 2, 00:06:50.914 "base_bdevs_list": [ 00:06:50.914 { 00:06:50.914 "name": "BaseBdev1", 00:06:50.914 "uuid": "5c6c4ed6-9849-42f7-9d40-88f5a6d2b0b7", 00:06:50.914 "is_configured": true, 00:06:50.914 "data_offset": 2048, 00:06:50.914 "data_size": 63488 00:06:50.914 }, 00:06:50.914 { 00:06:50.914 "name": "BaseBdev2", 00:06:50.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.914 "is_configured": false, 00:06:50.914 "data_offset": 0, 00:06:50.914 "data_size": 0 00:06:50.914 } 00:06:50.914 ] 00:06:50.914 }' 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.914 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:51.479 [2024-11-20 03:13:40.830793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:51.479 [2024-11-20 03:13:40.830856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.479 [2024-11-20 03:13:40.842832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.479 [2024-11-20 03:13:40.844788] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.479 [2024-11-20 03:13:40.844827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.479 "name": "Existed_Raid", 00:06:51.479 "uuid": "cd4af1a4-e0b7-4e92-b2e0-2ffa0fdeb94c", 00:06:51.479 "strip_size_kb": 64, 00:06:51.479 "state": "configuring", 00:06:51.479 "raid_level": "raid0", 00:06:51.479 "superblock": true, 00:06:51.479 "num_base_bdevs": 2, 00:06:51.479 "num_base_bdevs_discovered": 1, 00:06:51.479 "num_base_bdevs_operational": 2, 00:06:51.479 "base_bdevs_list": [ 00:06:51.479 { 00:06:51.479 "name": "BaseBdev1", 00:06:51.479 "uuid": "5c6c4ed6-9849-42f7-9d40-88f5a6d2b0b7", 00:06:51.479 "is_configured": true, 00:06:51.479 "data_offset": 2048, 
00:06:51.479 "data_size": 63488 00:06:51.479 }, 00:06:51.479 { 00:06:51.479 "name": "BaseBdev2", 00:06:51.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.479 "is_configured": false, 00:06:51.479 "data_offset": 0, 00:06:51.479 "data_size": 0 00:06:51.479 } 00:06:51.479 ] 00:06:51.479 }' 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.479 03:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.737 [2024-11-20 03:13:41.285604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:51.737 [2024-11-20 03:13:41.285912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:51.737 [2024-11-20 03:13:41.285927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:51.737 [2024-11-20 03:13:41.286214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:51.737 [2024-11-20 03:13:41.286408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:51.737 [2024-11-20 03:13:41.286432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:51.737 BaseBdev2 00:06:51.737 [2024-11-20 03:13:41.286634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.737 [ 00:06:51.737 { 00:06:51.737 "name": "BaseBdev2", 00:06:51.737 "aliases": [ 00:06:51.737 "fae07cf8-facd-4abc-b2a5-1dafb6f3dc8c" 00:06:51.737 ], 00:06:51.737 "product_name": "Malloc disk", 00:06:51.737 "block_size": 512, 00:06:51.737 "num_blocks": 65536, 00:06:51.737 "uuid": "fae07cf8-facd-4abc-b2a5-1dafb6f3dc8c", 00:06:51.737 "assigned_rate_limits": { 00:06:51.737 "rw_ios_per_sec": 0, 00:06:51.737 "rw_mbytes_per_sec": 0, 00:06:51.737 "r_mbytes_per_sec": 0, 00:06:51.737 "w_mbytes_per_sec": 0 00:06:51.737 }, 00:06:51.737 "claimed": true, 00:06:51.737 "claim_type": 
"exclusive_write", 00:06:51.737 "zoned": false, 00:06:51.737 "supported_io_types": { 00:06:51.737 "read": true, 00:06:51.737 "write": true, 00:06:51.737 "unmap": true, 00:06:51.737 "flush": true, 00:06:51.737 "reset": true, 00:06:51.737 "nvme_admin": false, 00:06:51.737 "nvme_io": false, 00:06:51.737 "nvme_io_md": false, 00:06:51.737 "write_zeroes": true, 00:06:51.737 "zcopy": true, 00:06:51.737 "get_zone_info": false, 00:06:51.737 "zone_management": false, 00:06:51.737 "zone_append": false, 00:06:51.737 "compare": false, 00:06:51.737 "compare_and_write": false, 00:06:51.737 "abort": true, 00:06:51.737 "seek_hole": false, 00:06:51.737 "seek_data": false, 00:06:51.737 "copy": true, 00:06:51.737 "nvme_iov_md": false 00:06:51.737 }, 00:06:51.737 "memory_domains": [ 00:06:51.737 { 00:06:51.737 "dma_device_id": "system", 00:06:51.737 "dma_device_type": 1 00:06:51.737 }, 00:06:51.737 { 00:06:51.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.737 "dma_device_type": 2 00:06:51.737 } 00:06:51.737 ], 00:06:51.737 "driver_specific": {} 00:06:51.737 } 00:06:51.737 ] 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.737 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.996 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.996 "name": "Existed_Raid", 00:06:51.996 "uuid": "cd4af1a4-e0b7-4e92-b2e0-2ffa0fdeb94c", 00:06:51.996 "strip_size_kb": 64, 00:06:51.996 "state": "online", 00:06:51.996 "raid_level": "raid0", 00:06:51.996 "superblock": true, 00:06:51.996 "num_base_bdevs": 2, 00:06:51.996 "num_base_bdevs_discovered": 2, 00:06:51.996 "num_base_bdevs_operational": 2, 00:06:51.996 "base_bdevs_list": [ 00:06:51.996 { 00:06:51.996 "name": "BaseBdev1", 00:06:51.996 "uuid": "5c6c4ed6-9849-42f7-9d40-88f5a6d2b0b7", 00:06:51.996 "is_configured": true, 00:06:51.996 "data_offset": 2048, 00:06:51.996 "data_size": 63488 
00:06:51.996 }, 00:06:51.996 { 00:06:51.996 "name": "BaseBdev2", 00:06:51.996 "uuid": "fae07cf8-facd-4abc-b2a5-1dafb6f3dc8c", 00:06:51.996 "is_configured": true, 00:06:51.996 "data_offset": 2048, 00:06:51.996 "data_size": 63488 00:06:51.996 } 00:06:51.996 ] 00:06:51.996 }' 00:06:51.996 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.996 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:52.255 [2024-11-20 03:13:41.785120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:52.255 "name": 
"Existed_Raid", 00:06:52.255 "aliases": [ 00:06:52.255 "cd4af1a4-e0b7-4e92-b2e0-2ffa0fdeb94c" 00:06:52.255 ], 00:06:52.255 "product_name": "Raid Volume", 00:06:52.255 "block_size": 512, 00:06:52.255 "num_blocks": 126976, 00:06:52.255 "uuid": "cd4af1a4-e0b7-4e92-b2e0-2ffa0fdeb94c", 00:06:52.255 "assigned_rate_limits": { 00:06:52.255 "rw_ios_per_sec": 0, 00:06:52.255 "rw_mbytes_per_sec": 0, 00:06:52.255 "r_mbytes_per_sec": 0, 00:06:52.255 "w_mbytes_per_sec": 0 00:06:52.255 }, 00:06:52.255 "claimed": false, 00:06:52.255 "zoned": false, 00:06:52.255 "supported_io_types": { 00:06:52.255 "read": true, 00:06:52.255 "write": true, 00:06:52.255 "unmap": true, 00:06:52.255 "flush": true, 00:06:52.255 "reset": true, 00:06:52.255 "nvme_admin": false, 00:06:52.255 "nvme_io": false, 00:06:52.255 "nvme_io_md": false, 00:06:52.255 "write_zeroes": true, 00:06:52.255 "zcopy": false, 00:06:52.255 "get_zone_info": false, 00:06:52.255 "zone_management": false, 00:06:52.255 "zone_append": false, 00:06:52.255 "compare": false, 00:06:52.255 "compare_and_write": false, 00:06:52.255 "abort": false, 00:06:52.255 "seek_hole": false, 00:06:52.255 "seek_data": false, 00:06:52.255 "copy": false, 00:06:52.255 "nvme_iov_md": false 00:06:52.255 }, 00:06:52.255 "memory_domains": [ 00:06:52.255 { 00:06:52.255 "dma_device_id": "system", 00:06:52.255 "dma_device_type": 1 00:06:52.255 }, 00:06:52.255 { 00:06:52.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.255 "dma_device_type": 2 00:06:52.255 }, 00:06:52.255 { 00:06:52.255 "dma_device_id": "system", 00:06:52.255 "dma_device_type": 1 00:06:52.255 }, 00:06:52.255 { 00:06:52.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.255 "dma_device_type": 2 00:06:52.255 } 00:06:52.255 ], 00:06:52.255 "driver_specific": { 00:06:52.255 "raid": { 00:06:52.255 "uuid": "cd4af1a4-e0b7-4e92-b2e0-2ffa0fdeb94c", 00:06:52.255 "strip_size_kb": 64, 00:06:52.255 "state": "online", 00:06:52.255 "raid_level": "raid0", 00:06:52.255 "superblock": true, 00:06:52.255 
"num_base_bdevs": 2, 00:06:52.255 "num_base_bdevs_discovered": 2, 00:06:52.255 "num_base_bdevs_operational": 2, 00:06:52.255 "base_bdevs_list": [ 00:06:52.255 { 00:06:52.255 "name": "BaseBdev1", 00:06:52.255 "uuid": "5c6c4ed6-9849-42f7-9d40-88f5a6d2b0b7", 00:06:52.255 "is_configured": true, 00:06:52.255 "data_offset": 2048, 00:06:52.255 "data_size": 63488 00:06:52.255 }, 00:06:52.255 { 00:06:52.255 "name": "BaseBdev2", 00:06:52.255 "uuid": "fae07cf8-facd-4abc-b2a5-1dafb6f3dc8c", 00:06:52.255 "is_configured": true, 00:06:52.255 "data_offset": 2048, 00:06:52.255 "data_size": 63488 00:06:52.255 } 00:06:52.255 ] 00:06:52.255 } 00:06:52.255 } 00:06:52.255 }' 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:52.255 BaseBdev2' 00:06:52.255 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.515 03:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.515 [2024-11-20 03:13:42.028464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:52.515 [2024-11-20 03:13:42.028503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.515 [2024-11-20 03:13:42.028556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.515 03:13:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.515 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.774 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.774 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.774 "name": "Existed_Raid", 00:06:52.774 "uuid": "cd4af1a4-e0b7-4e92-b2e0-2ffa0fdeb94c", 00:06:52.774 "strip_size_kb": 64, 00:06:52.774 "state": "offline", 00:06:52.774 "raid_level": "raid0", 00:06:52.774 "superblock": true, 00:06:52.774 "num_base_bdevs": 2, 00:06:52.774 "num_base_bdevs_discovered": 1, 00:06:52.774 "num_base_bdevs_operational": 1, 00:06:52.774 "base_bdevs_list": [ 00:06:52.774 { 00:06:52.774 "name": null, 00:06:52.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.774 "is_configured": false, 00:06:52.774 "data_offset": 0, 00:06:52.774 "data_size": 63488 00:06:52.774 }, 00:06:52.774 { 00:06:52.774 "name": "BaseBdev2", 00:06:52.774 "uuid": "fae07cf8-facd-4abc-b2a5-1dafb6f3dc8c", 00:06:52.774 "is_configured": true, 00:06:52.774 "data_offset": 2048, 00:06:52.774 "data_size": 63488 00:06:52.774 } 00:06:52.774 ] 00:06:52.774 }' 00:06:52.774 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.774 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:53.033 03:13:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.033 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.033 [2024-11-20 03:13:42.601635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:53.033 [2024-11-20 03:13:42.601694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.292 03:13:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60832 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60832 ']' 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60832 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60832 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.292 killing process with pid 60832 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60832' 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60832 00:06:53.292 [2024-11-20 03:13:42.794880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.292 03:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60832 00:06:53.292 [2024-11-20 03:13:42.812152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.669 ************************************ 
00:06:54.669 END TEST raid_state_function_test_sb 00:06:54.669 ************************************ 00:06:54.669 03:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:54.669 00:06:54.669 real 0m5.035s 00:06:54.669 user 0m7.277s 00:06:54.669 sys 0m0.805s 00:06:54.669 03:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.669 03:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.669 03:13:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:54.669 03:13:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:54.669 03:13:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.669 03:13:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.669 ************************************ 00:06:54.669 START TEST raid_superblock_test 00:06:54.669 ************************************ 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:54.669 
03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61084 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61084 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61084 ']' 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.669 03:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.669 [2024-11-20 03:13:44.079706] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:54.670 [2024-11-20 03:13:44.079928] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61084 ] 00:06:54.670 [2024-11-20 03:13:44.252490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.928 [2024-11-20 03:13:44.367433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.186 [2024-11-20 03:13:44.570747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.186 [2024-11-20 03:13:44.570859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:55.443 03:13:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.443 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.444 malloc1 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.444 [2024-11-20 03:13:44.966343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:55.444 [2024-11-20 03:13:44.966471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.444 [2024-11-20 03:13:44.966516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:55.444 [2024-11-20 03:13:44.966545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.444 [2024-11-20 03:13:44.968746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.444 [2024-11-20 03:13:44.968836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:55.444 pt1 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:55.444 03:13:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.444 03:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.444 malloc2 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.444 [2024-11-20 03:13:45.024864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:55.444 [2024-11-20 03:13:45.024971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.444 [2024-11-20 03:13:45.025030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:55.444 
[2024-11-20 03:13:45.025061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.444 [2024-11-20 03:13:45.027361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.444 [2024-11-20 03:13:45.027439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:55.444 pt2 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.444 [2024-11-20 03:13:45.036908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:55.444 [2024-11-20 03:13:45.038819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:55.444 [2024-11-20 03:13:45.038984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:55.444 [2024-11-20 03:13:45.038998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:55.444 [2024-11-20 03:13:45.039258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:55.444 [2024-11-20 03:13:45.039420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:55.444 [2024-11-20 03:13:45.039432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:55.444 [2024-11-20 03:13:45.039603] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.444 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.702 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.702 "name": "raid_bdev1", 00:06:55.702 "uuid": 
"6f78c98d-e751-4a38-a65e-4da1d209f3ec", 00:06:55.702 "strip_size_kb": 64, 00:06:55.702 "state": "online", 00:06:55.702 "raid_level": "raid0", 00:06:55.702 "superblock": true, 00:06:55.702 "num_base_bdevs": 2, 00:06:55.702 "num_base_bdevs_discovered": 2, 00:06:55.702 "num_base_bdevs_operational": 2, 00:06:55.702 "base_bdevs_list": [ 00:06:55.702 { 00:06:55.702 "name": "pt1", 00:06:55.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:55.702 "is_configured": true, 00:06:55.702 "data_offset": 2048, 00:06:55.703 "data_size": 63488 00:06:55.703 }, 00:06:55.703 { 00:06:55.703 "name": "pt2", 00:06:55.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:55.703 "is_configured": true, 00:06:55.703 "data_offset": 2048, 00:06:55.703 "data_size": 63488 00:06:55.703 } 00:06:55.703 ] 00:06:55.703 }' 00:06:55.703 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.703 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.961 03:13:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.961 [2024-11-20 03:13:45.516351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.961 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:55.961 "name": "raid_bdev1", 00:06:55.961 "aliases": [ 00:06:55.961 "6f78c98d-e751-4a38-a65e-4da1d209f3ec" 00:06:55.961 ], 00:06:55.961 "product_name": "Raid Volume", 00:06:55.961 "block_size": 512, 00:06:55.961 "num_blocks": 126976, 00:06:55.961 "uuid": "6f78c98d-e751-4a38-a65e-4da1d209f3ec", 00:06:55.961 "assigned_rate_limits": { 00:06:55.961 "rw_ios_per_sec": 0, 00:06:55.961 "rw_mbytes_per_sec": 0, 00:06:55.961 "r_mbytes_per_sec": 0, 00:06:55.961 "w_mbytes_per_sec": 0 00:06:55.961 }, 00:06:55.961 "claimed": false, 00:06:55.961 "zoned": false, 00:06:55.961 "supported_io_types": { 00:06:55.961 "read": true, 00:06:55.961 "write": true, 00:06:55.961 "unmap": true, 00:06:55.961 "flush": true, 00:06:55.961 "reset": true, 00:06:55.961 "nvme_admin": false, 00:06:55.961 "nvme_io": false, 00:06:55.961 "nvme_io_md": false, 00:06:55.961 "write_zeroes": true, 00:06:55.961 "zcopy": false, 00:06:55.961 "get_zone_info": false, 00:06:55.962 "zone_management": false, 00:06:55.962 "zone_append": false, 00:06:55.962 "compare": false, 00:06:55.962 "compare_and_write": false, 00:06:55.962 "abort": false, 00:06:55.962 "seek_hole": false, 00:06:55.962 "seek_data": false, 00:06:55.962 "copy": false, 00:06:55.962 "nvme_iov_md": false 00:06:55.962 }, 00:06:55.962 "memory_domains": [ 00:06:55.962 { 00:06:55.962 "dma_device_id": "system", 00:06:55.962 "dma_device_type": 1 00:06:55.962 }, 00:06:55.962 { 00:06:55.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.962 "dma_device_type": 2 00:06:55.962 }, 00:06:55.962 { 00:06:55.962 "dma_device_id": "system", 00:06:55.962 "dma_device_type": 
1 00:06:55.962 }, 00:06:55.962 { 00:06:55.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.962 "dma_device_type": 2 00:06:55.962 } 00:06:55.962 ], 00:06:55.962 "driver_specific": { 00:06:55.962 "raid": { 00:06:55.962 "uuid": "6f78c98d-e751-4a38-a65e-4da1d209f3ec", 00:06:55.962 "strip_size_kb": 64, 00:06:55.962 "state": "online", 00:06:55.962 "raid_level": "raid0", 00:06:55.962 "superblock": true, 00:06:55.962 "num_base_bdevs": 2, 00:06:55.962 "num_base_bdevs_discovered": 2, 00:06:55.962 "num_base_bdevs_operational": 2, 00:06:55.962 "base_bdevs_list": [ 00:06:55.962 { 00:06:55.962 "name": "pt1", 00:06:55.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:55.962 "is_configured": true, 00:06:55.962 "data_offset": 2048, 00:06:55.962 "data_size": 63488 00:06:55.962 }, 00:06:55.962 { 00:06:55.962 "name": "pt2", 00:06:55.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:55.962 "is_configured": true, 00:06:55.962 "data_offset": 2048, 00:06:55.962 "data_size": 63488 00:06:55.962 } 00:06:55.962 ] 00:06:55.962 } 00:06:55.962 } 00:06:55.962 }' 00:06:55.962 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:56.221 pt2' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:56.221 [2024-11-20 03:13:45.748029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.221 03:13:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6f78c98d-e751-4a38-a65e-4da1d209f3ec 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6f78c98d-e751-4a38-a65e-4da1d209f3ec ']' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.221 [2024-11-20 03:13:45.795593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:56.221 [2024-11-20 03:13:45.795683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.221 [2024-11-20 03:13:45.795805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.221 [2024-11-20 03:13:45.795896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.221 [2024-11-20 03:13:45.795917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.221 03:13:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.221 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 [2024-11-20 03:13:45.927396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:56.480 [2024-11-20 03:13:45.929340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:56.480 [2024-11-20 03:13:45.929404] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:56.480 [2024-11-20 03:13:45.929458] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:56.480 [2024-11-20 03:13:45.929473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:56.480 [2024-11-20 03:13:45.929486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:56.480 request: 00:06:56.480 { 00:06:56.480 "name": "raid_bdev1", 00:06:56.480 "raid_level": "raid0", 00:06:56.480 "base_bdevs": [ 00:06:56.480 "malloc1", 00:06:56.480 "malloc2" 00:06:56.480 ], 00:06:56.480 "strip_size_kb": 64, 00:06:56.480 "superblock": false, 00:06:56.480 "method": "bdev_raid_create", 00:06:56.480 "req_id": 1 00:06:56.480 } 00:06:56.480 Got JSON-RPC error response 00:06:56.480 response: 00:06:56.480 { 00:06:56.480 "code": -17, 00:06:56.480 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:56.480 } 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 [2024-11-20 03:13:45.991268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:56.480 [2024-11-20 03:13:45.991397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.480 [2024-11-20 03:13:45.991440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:56.480 [2024-11-20 03:13:45.991493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.480 [2024-11-20 03:13:45.993712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.480 [2024-11-20 03:13:45.993792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:56.480 [2024-11-20 03:13:45.993903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:56.480 [2024-11-20 03:13:45.994014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:56.480 pt1 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.480 03:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.480 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.480 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.480 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:56.480 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.480 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.480 "name": "raid_bdev1", 00:06:56.480 "uuid": "6f78c98d-e751-4a38-a65e-4da1d209f3ec", 00:06:56.480 "strip_size_kb": 64, 00:06:56.480 "state": "configuring", 00:06:56.480 "raid_level": "raid0", 00:06:56.480 "superblock": true, 00:06:56.480 "num_base_bdevs": 2, 00:06:56.480 "num_base_bdevs_discovered": 1, 00:06:56.480 "num_base_bdevs_operational": 2, 00:06:56.480 "base_bdevs_list": [ 00:06:56.480 { 00:06:56.480 "name": "pt1", 00:06:56.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:56.480 "is_configured": true, 00:06:56.480 "data_offset": 2048, 00:06:56.480 "data_size": 63488 00:06:56.480 }, 00:06:56.480 { 00:06:56.480 "name": null, 00:06:56.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:56.480 "is_configured": false, 00:06:56.480 "data_offset": 2048, 00:06:56.480 "data_size": 63488 00:06:56.480 } 00:06:56.480 ] 00:06:56.480 }' 00:06:56.480 03:13:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.480 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.054 [2024-11-20 03:13:46.454787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:57.054 [2024-11-20 03:13:46.454921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.054 [2024-11-20 03:13:46.454948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:57.054 [2024-11-20 03:13:46.454959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.054 [2024-11-20 03:13:46.455428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.054 [2024-11-20 03:13:46.455451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:57.054 [2024-11-20 03:13:46.455533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:57.054 [2024-11-20 03:13:46.455558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:57.054 [2024-11-20 03:13:46.455695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:57.054 [2024-11-20 03:13:46.455708] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:57.054 [2024-11-20 03:13:46.455940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:57.054 [2024-11-20 03:13:46.456101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:57.054 [2024-11-20 03:13:46.456111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:57.054 [2024-11-20 03:13:46.456257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.054 pt2 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.054 "name": "raid_bdev1", 00:06:57.054 "uuid": "6f78c98d-e751-4a38-a65e-4da1d209f3ec", 00:06:57.054 "strip_size_kb": 64, 00:06:57.054 "state": "online", 00:06:57.054 "raid_level": "raid0", 00:06:57.054 "superblock": true, 00:06:57.054 "num_base_bdevs": 2, 00:06:57.054 "num_base_bdevs_discovered": 2, 00:06:57.054 "num_base_bdevs_operational": 2, 00:06:57.054 "base_bdevs_list": [ 00:06:57.054 { 00:06:57.054 "name": "pt1", 00:06:57.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:57.054 "is_configured": true, 00:06:57.054 "data_offset": 2048, 00:06:57.054 "data_size": 63488 00:06:57.054 }, 00:06:57.054 { 00:06:57.054 "name": "pt2", 00:06:57.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:57.054 "is_configured": true, 00:06:57.054 "data_offset": 2048, 00:06:57.054 "data_size": 63488 00:06:57.054 } 00:06:57.054 ] 00:06:57.054 }' 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.054 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:57.311 
03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:57.311 [2024-11-20 03:13:46.927008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.311 03:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.569 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:57.569 "name": "raid_bdev1", 00:06:57.569 "aliases": [ 00:06:57.569 "6f78c98d-e751-4a38-a65e-4da1d209f3ec" 00:06:57.569 ], 00:06:57.570 "product_name": "Raid Volume", 00:06:57.570 "block_size": 512, 00:06:57.570 "num_blocks": 126976, 00:06:57.570 "uuid": "6f78c98d-e751-4a38-a65e-4da1d209f3ec", 00:06:57.570 "assigned_rate_limits": { 00:06:57.570 "rw_ios_per_sec": 0, 00:06:57.570 "rw_mbytes_per_sec": 0, 00:06:57.570 "r_mbytes_per_sec": 0, 00:06:57.570 "w_mbytes_per_sec": 0 00:06:57.570 }, 00:06:57.570 "claimed": false, 00:06:57.570 "zoned": false, 00:06:57.570 "supported_io_types": { 00:06:57.570 "read": true, 00:06:57.570 "write": true, 00:06:57.570 "unmap": true, 00:06:57.570 "flush": true, 00:06:57.570 "reset": true, 00:06:57.570 "nvme_admin": false, 00:06:57.570 "nvme_io": false, 00:06:57.570 "nvme_io_md": false, 00:06:57.570 
"write_zeroes": true, 00:06:57.570 "zcopy": false, 00:06:57.570 "get_zone_info": false, 00:06:57.570 "zone_management": false, 00:06:57.570 "zone_append": false, 00:06:57.570 "compare": false, 00:06:57.570 "compare_and_write": false, 00:06:57.570 "abort": false, 00:06:57.570 "seek_hole": false, 00:06:57.570 "seek_data": false, 00:06:57.570 "copy": false, 00:06:57.570 "nvme_iov_md": false 00:06:57.570 }, 00:06:57.570 "memory_domains": [ 00:06:57.570 { 00:06:57.570 "dma_device_id": "system", 00:06:57.570 "dma_device_type": 1 00:06:57.570 }, 00:06:57.570 { 00:06:57.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.570 "dma_device_type": 2 00:06:57.570 }, 00:06:57.570 { 00:06:57.570 "dma_device_id": "system", 00:06:57.570 "dma_device_type": 1 00:06:57.570 }, 00:06:57.570 { 00:06:57.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.570 "dma_device_type": 2 00:06:57.570 } 00:06:57.570 ], 00:06:57.570 "driver_specific": { 00:06:57.570 "raid": { 00:06:57.570 "uuid": "6f78c98d-e751-4a38-a65e-4da1d209f3ec", 00:06:57.570 "strip_size_kb": 64, 00:06:57.570 "state": "online", 00:06:57.570 "raid_level": "raid0", 00:06:57.570 "superblock": true, 00:06:57.570 "num_base_bdevs": 2, 00:06:57.570 "num_base_bdevs_discovered": 2, 00:06:57.570 "num_base_bdevs_operational": 2, 00:06:57.570 "base_bdevs_list": [ 00:06:57.570 { 00:06:57.570 "name": "pt1", 00:06:57.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:57.570 "is_configured": true, 00:06:57.570 "data_offset": 2048, 00:06:57.570 "data_size": 63488 00:06:57.570 }, 00:06:57.570 { 00:06:57.570 "name": "pt2", 00:06:57.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:57.570 "is_configured": true, 00:06:57.570 "data_offset": 2048, 00:06:57.570 "data_size": 63488 00:06:57.570 } 00:06:57.570 ] 00:06:57.570 } 00:06:57.570 } 00:06:57.570 }' 00:06:57.570 03:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:57.570 pt2' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.570 03:13:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.570 [2024-11-20 03:13:47.159069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6f78c98d-e751-4a38-a65e-4da1d209f3ec '!=' 6f78c98d-e751-4a38-a65e-4da1d209f3ec ']' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61084 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61084 ']' 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61084 00:06:57.570 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:57.829 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.829 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61084 00:06:57.829 03:13:47 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.829 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.829 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61084' 00:06:57.829 killing process with pid 61084 00:06:57.829 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61084 00:06:57.829 [2024-11-20 03:13:47.243804] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.829 [2024-11-20 03:13:47.243991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.829 03:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61084 00:06:57.829 [2024-11-20 03:13:47.244088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.829 [2024-11-20 03:13:47.244147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:57.829 [2024-11-20 03:13:47.454318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.206 03:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:59.206 00:06:59.206 real 0m4.559s 00:06:59.206 user 0m6.456s 00:06:59.206 sys 0m0.761s 00:06:59.206 03:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.206 03:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.207 ************************************ 00:06:59.207 END TEST raid_superblock_test 00:06:59.207 ************************************ 00:06:59.207 03:13:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:59.207 03:13:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:59.207 03:13:48 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:59.207 03:13:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.207 ************************************ 00:06:59.207 START TEST raid_read_error_test 00:06:59.207 ************************************ 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NfJQQMIMDI 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61296 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61296 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61296 ']' 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.207 03:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.207 [2024-11-20 03:13:48.722130] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:59.207 [2024-11-20 03:13:48.722249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61296 ] 00:06:59.466 [2024-11-20 03:13:48.882159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.466 [2024-11-20 03:13:48.997849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.724 [2024-11-20 03:13:49.196870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.724 [2024-11-20 03:13:49.196933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.983 BaseBdev1_malloc 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.983 true 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.983 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.983 [2024-11-20 03:13:49.609684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:59.983 [2024-11-20 03:13:49.609739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.983 [2024-11-20 03:13:49.609760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:59.983 [2024-11-20 03:13:49.609770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.983 [2024-11-20 03:13:49.611876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.983 [2024-11-20 03:13:49.611918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:00.243 BaseBdev1 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:00.243 BaseBdev2_malloc 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.243 true 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.243 [2024-11-20 03:13:49.676660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:00.243 [2024-11-20 03:13:49.676712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.243 [2024-11-20 03:13:49.676730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:00.243 [2024-11-20 03:13:49.676740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.243 [2024-11-20 03:13:49.678804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.243 [2024-11-20 03:13:49.678930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:00.243 BaseBdev2 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:00.243 03:13:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.243 [2024-11-20 03:13:49.688702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:00.243 [2024-11-20 03:13:49.690480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:00.243 [2024-11-20 03:13:49.690700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:00.243 [2024-11-20 03:13:49.690719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:00.243 [2024-11-20 03:13:49.690956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:00.243 [2024-11-20 03:13:49.691131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:00.243 [2024-11-20 03:13:49.691143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:00.243 [2024-11-20 03:13:49.691302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.243 03:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.243 "name": "raid_bdev1", 00:07:00.243 "uuid": "23b03c39-f74a-4a09-9715-c57a36d9f752", 00:07:00.243 "strip_size_kb": 64, 00:07:00.243 "state": "online", 00:07:00.243 "raid_level": "raid0", 00:07:00.243 "superblock": true, 00:07:00.243 "num_base_bdevs": 2, 00:07:00.243 "num_base_bdevs_discovered": 2, 00:07:00.243 "num_base_bdevs_operational": 2, 00:07:00.244 "base_bdevs_list": [ 00:07:00.244 { 00:07:00.244 "name": "BaseBdev1", 00:07:00.244 "uuid": "c90503d1-b924-539b-a4f2-1f8f5401346f", 00:07:00.244 "is_configured": true, 00:07:00.244 "data_offset": 2048, 00:07:00.244 "data_size": 63488 00:07:00.244 }, 00:07:00.244 { 00:07:00.244 "name": "BaseBdev2", 00:07:00.244 "uuid": "7c21caab-0121-5920-b3e8-ea59edaf438e", 00:07:00.244 "is_configured": true, 00:07:00.244 "data_offset": 2048, 00:07:00.244 "data_size": 63488 00:07:00.244 } 00:07:00.244 ] 00:07:00.244 }' 00:07:00.244 03:13:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.244 03:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.811 03:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:00.811 03:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:00.811 [2024-11-20 03:13:50.236944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.748 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:01.749 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.749 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.749 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.749 "name": "raid_bdev1", 00:07:01.749 "uuid": "23b03c39-f74a-4a09-9715-c57a36d9f752", 00:07:01.749 "strip_size_kb": 64, 00:07:01.749 "state": "online", 00:07:01.749 "raid_level": "raid0", 00:07:01.749 "superblock": true, 00:07:01.749 "num_base_bdevs": 2, 00:07:01.749 "num_base_bdevs_discovered": 2, 00:07:01.749 "num_base_bdevs_operational": 2, 00:07:01.749 "base_bdevs_list": [ 00:07:01.749 { 00:07:01.749 "name": "BaseBdev1", 00:07:01.749 "uuid": "c90503d1-b924-539b-a4f2-1f8f5401346f", 00:07:01.749 "is_configured": true, 00:07:01.749 "data_offset": 2048, 00:07:01.749 "data_size": 63488 00:07:01.749 }, 00:07:01.749 { 00:07:01.749 "name": "BaseBdev2", 00:07:01.749 "uuid": "7c21caab-0121-5920-b3e8-ea59edaf438e", 00:07:01.749 "is_configured": true, 00:07:01.749 "data_offset": 2048, 00:07:01.749 "data_size": 63488 00:07:01.749 } 00:07:01.749 ] 00:07:01.749 }' 00:07:01.749 03:13:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.749 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.007 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:02.007 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.007 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.007 [2024-11-20 03:13:51.637003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:02.007 [2024-11-20 03:13:51.637097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:02.007 [2024-11-20 03:13:51.639841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.265 [2024-11-20 03:13:51.639929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.265 [2024-11-20 03:13:51.639968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.265 [2024-11-20 03:13:51.639980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:02.266 { 00:07:02.266 "results": [ 00:07:02.266 { 00:07:02.266 "job": "raid_bdev1", 00:07:02.266 "core_mask": "0x1", 00:07:02.266 "workload": "randrw", 00:07:02.266 "percentage": 50, 00:07:02.266 "status": "finished", 00:07:02.266 "queue_depth": 1, 00:07:02.266 "io_size": 131072, 00:07:02.266 "runtime": 1.401063, 00:07:02.266 "iops": 15975.013257790692, 00:07:02.266 "mibps": 1996.8766572238364, 00:07:02.266 "io_failed": 1, 00:07:02.266 "io_timeout": 0, 00:07:02.266 "avg_latency_us": 87.01242423728084, 00:07:02.266 "min_latency_us": 26.382532751091702, 00:07:02.266 "max_latency_us": 1395.1441048034935 00:07:02.266 } 00:07:02.266 ], 00:07:02.266 "core_count": 1 00:07:02.266 } 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61296 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61296 ']' 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61296 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61296 00:07:02.266 killing process with pid 61296 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61296' 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61296 00:07:02.266 [2024-11-20 03:13:51.686147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.266 03:13:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61296 00:07:02.266 [2024-11-20 03:13:51.828810] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NfJQQMIMDI 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:03.645 00:07:03.645 real 0m4.363s 00:07:03.645 user 0m5.270s 00:07:03.645 sys 0m0.535s 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.645 ************************************ 00:07:03.645 END TEST raid_read_error_test 00:07:03.645 ************************************ 00:07:03.645 03:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.645 03:13:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:03.645 03:13:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:03.645 03:13:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.645 03:13:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.645 ************************************ 00:07:03.645 START TEST raid_write_error_test 00:07:03.645 ************************************ 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:03.645 03:13:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.e3ZwCK65NP 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61436 00:07:03.645 03:13:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61436 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61436 ']' 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.645 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.645 [2024-11-20 03:13:53.153442] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:03.645 [2024-11-20 03:13:53.153667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61436 ] 00:07:03.904 [2024-11-20 03:13:53.330134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.904 [2024-11-20 03:13:53.440564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.162 [2024-11-20 03:13:53.638954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.162 [2024-11-20 03:13:53.639051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.421 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.421 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:04.421 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:04.421 03:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:04.421 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.421 03:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.421 BaseBdev1_malloc 00:07:04.421 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.421 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:04.421 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.421 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.421 true 00:07:04.421 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:04.421 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:04.421 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.421 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.421 [2024-11-20 03:13:54.051502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:04.421 [2024-11-20 03:13:54.051608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.422 [2024-11-20 03:13:54.051646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:04.422 [2024-11-20 03:13:54.051662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.422 [2024-11-20 03:13:54.053789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.422 [2024-11-20 03:13:54.053831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:04.681 BaseBdev1 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.682 BaseBdev2_malloc 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:04.682 03:13:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.682 true 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.682 [2024-11-20 03:13:54.120586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:04.682 [2024-11-20 03:13:54.120655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.682 [2024-11-20 03:13:54.120674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:04.682 [2024-11-20 03:13:54.120685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.682 [2024-11-20 03:13:54.122976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.682 [2024-11-20 03:13:54.123021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:04.682 BaseBdev2 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.682 [2024-11-20 03:13:54.132638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:04.682 [2024-11-20 03:13:54.134527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:04.682 [2024-11-20 03:13:54.134761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:04.682 [2024-11-20 03:13:54.134781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:04.682 [2024-11-20 03:13:54.135067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:04.682 [2024-11-20 03:13:54.135279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:04.682 [2024-11-20 03:13:54.135292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:04.682 [2024-11-20 03:13:54.135463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.682 "name": "raid_bdev1", 00:07:04.682 "uuid": "3b863b72-d88a-4cf7-b88c-78a05907861b", 00:07:04.682 "strip_size_kb": 64, 00:07:04.682 "state": "online", 00:07:04.682 "raid_level": "raid0", 00:07:04.682 "superblock": true, 00:07:04.682 "num_base_bdevs": 2, 00:07:04.682 "num_base_bdevs_discovered": 2, 00:07:04.682 "num_base_bdevs_operational": 2, 00:07:04.682 "base_bdevs_list": [ 00:07:04.682 { 00:07:04.682 "name": "BaseBdev1", 00:07:04.682 "uuid": "628837d2-3d54-542d-8fd5-249db2f69de1", 00:07:04.682 "is_configured": true, 00:07:04.682 "data_offset": 2048, 00:07:04.682 "data_size": 63488 00:07:04.682 }, 00:07:04.682 { 00:07:04.682 "name": "BaseBdev2", 00:07:04.682 "uuid": "e1a03508-17dc-54b7-aa54-1562509638fd", 00:07:04.682 "is_configured": true, 00:07:04.682 "data_offset": 2048, 00:07:04.682 "data_size": 63488 00:07:04.682 } 00:07:04.682 ] 00:07:04.682 }' 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.682 03:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.250 03:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:05.250 03:13:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:05.250 [2024-11-20 03:13:54.676824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.187 03:13:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.187 "name": "raid_bdev1", 00:07:06.187 "uuid": "3b863b72-d88a-4cf7-b88c-78a05907861b", 00:07:06.187 "strip_size_kb": 64, 00:07:06.187 "state": "online", 00:07:06.187 "raid_level": "raid0", 00:07:06.187 "superblock": true, 00:07:06.187 "num_base_bdevs": 2, 00:07:06.187 "num_base_bdevs_discovered": 2, 00:07:06.187 "num_base_bdevs_operational": 2, 00:07:06.187 "base_bdevs_list": [ 00:07:06.187 { 00:07:06.187 "name": "BaseBdev1", 00:07:06.187 "uuid": "628837d2-3d54-542d-8fd5-249db2f69de1", 00:07:06.187 "is_configured": true, 00:07:06.187 "data_offset": 2048, 00:07:06.187 "data_size": 63488 00:07:06.187 }, 00:07:06.187 { 00:07:06.187 "name": "BaseBdev2", 00:07:06.187 "uuid": "e1a03508-17dc-54b7-aa54-1562509638fd", 00:07:06.187 "is_configured": true, 00:07:06.187 "data_offset": 2048, 00:07:06.187 "data_size": 63488 00:07:06.187 } 00:07:06.187 ] 00:07:06.187 }' 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.187 03:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.446 [2024-11-20 03:13:56.016466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:06.446 [2024-11-20 03:13:56.016558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:06.446 [2024-11-20 03:13:56.019383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.446 [2024-11-20 03:13:56.019480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.446 [2024-11-20 03:13:56.019566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.446 [2024-11-20 03:13:56.019650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:06.446 { 00:07:06.446 "results": [ 00:07:06.446 { 00:07:06.446 "job": "raid_bdev1", 00:07:06.446 "core_mask": "0x1", 00:07:06.446 "workload": "randrw", 00:07:06.446 "percentage": 50, 00:07:06.446 "status": "finished", 00:07:06.446 "queue_depth": 1, 00:07:06.446 "io_size": 131072, 00:07:06.446 "runtime": 1.340599, 00:07:06.446 "iops": 16042.082680950829, 00:07:06.446 "mibps": 2005.2603351188536, 00:07:06.446 "io_failed": 1, 00:07:06.446 "io_timeout": 0, 00:07:06.446 "avg_latency_us": 86.710259663605, 00:07:06.446 "min_latency_us": 26.1589519650655, 00:07:06.446 "max_latency_us": 1466.6899563318777 00:07:06.446 } 00:07:06.446 ], 00:07:06.446 "core_count": 1 00:07:06.446 } 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61436 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61436 ']' 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61436 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61436 00:07:06.446 killing process with pid 61436 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61436' 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61436 00:07:06.446 [2024-11-20 03:13:56.064020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.446 03:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61436 00:07:06.705 [2024-11-20 03:13:56.198082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.088 03:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:08.088 03:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.e3ZwCK65NP 00:07:08.088 03:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:08.088 03:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:08.088 03:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:08.088 ************************************ 00:07:08.088 END TEST raid_write_error_test 00:07:08.088 ************************************ 00:07:08.088 
03:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.088 03:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.088 03:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:08.088 00:07:08.088 real 0m4.320s 00:07:08.088 user 0m5.206s 00:07:08.088 sys 0m0.497s 00:07:08.088 03:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.088 03:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.088 03:13:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:08.088 03:13:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:08.088 03:13:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:08.088 03:13:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.088 03:13:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.088 ************************************ 00:07:08.088 START TEST raid_state_function_test 00:07:08.088 ************************************ 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61578 00:07:08.089 03:13:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61578' 00:07:08.089 Process raid pid: 61578 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61578 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61578 ']' 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.089 03:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.089 [2024-11-20 03:13:57.531826] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:08.089 [2024-11-20 03:13:57.531972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.089 [2024-11-20 03:13:57.702215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.368 [2024-11-20 03:13:57.817683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.626 [2024-11-20 03:13:58.018773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.626 [2024-11-20 03:13:58.018922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.884 [2024-11-20 03:13:58.373953] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.884 [2024-11-20 03:13:58.374010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.884 [2024-11-20 03:13:58.374020] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.884 [2024-11-20 03:13:58.374030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.884 03:13:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.884 "name": "Existed_Raid", 00:07:08.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.884 "strip_size_kb": 64, 00:07:08.884 "state": "configuring", 00:07:08.884 
"raid_level": "concat", 00:07:08.884 "superblock": false, 00:07:08.884 "num_base_bdevs": 2, 00:07:08.884 "num_base_bdevs_discovered": 0, 00:07:08.884 "num_base_bdevs_operational": 2, 00:07:08.884 "base_bdevs_list": [ 00:07:08.884 { 00:07:08.884 "name": "BaseBdev1", 00:07:08.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.884 "is_configured": false, 00:07:08.884 "data_offset": 0, 00:07:08.884 "data_size": 0 00:07:08.884 }, 00:07:08.884 { 00:07:08.884 "name": "BaseBdev2", 00:07:08.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.884 "is_configured": false, 00:07:08.884 "data_offset": 0, 00:07:08.884 "data_size": 0 00:07:08.884 } 00:07:08.884 ] 00:07:08.884 }' 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.884 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.452 [2024-11-20 03:13:58.821129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:09.452 [2024-11-20 03:13:58.821223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:09.452 [2024-11-20 03:13:58.833104] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:09.452 [2024-11-20 03:13:58.833192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:09.452 [2024-11-20 03:13:58.833220] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.452 [2024-11-20 03:13:58.833246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.452 [2024-11-20 03:13:58.877581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.452 BaseBdev1 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.452 [ 00:07:09.452 { 00:07:09.452 "name": "BaseBdev1", 00:07:09.452 "aliases": [ 00:07:09.452 "7a110946-a823-4c4a-bb7b-6fd284774cea" 00:07:09.452 ], 00:07:09.452 "product_name": "Malloc disk", 00:07:09.452 "block_size": 512, 00:07:09.452 "num_blocks": 65536, 00:07:09.452 "uuid": "7a110946-a823-4c4a-bb7b-6fd284774cea", 00:07:09.452 "assigned_rate_limits": { 00:07:09.452 "rw_ios_per_sec": 0, 00:07:09.452 "rw_mbytes_per_sec": 0, 00:07:09.452 "r_mbytes_per_sec": 0, 00:07:09.452 "w_mbytes_per_sec": 0 00:07:09.452 }, 00:07:09.452 "claimed": true, 00:07:09.452 "claim_type": "exclusive_write", 00:07:09.452 "zoned": false, 00:07:09.452 "supported_io_types": { 00:07:09.452 "read": true, 00:07:09.452 "write": true, 00:07:09.452 "unmap": true, 00:07:09.452 "flush": true, 00:07:09.452 "reset": true, 00:07:09.452 "nvme_admin": false, 00:07:09.452 "nvme_io": false, 00:07:09.452 "nvme_io_md": false, 00:07:09.452 "write_zeroes": true, 00:07:09.452 "zcopy": true, 00:07:09.452 "get_zone_info": false, 00:07:09.452 "zone_management": false, 00:07:09.452 "zone_append": false, 00:07:09.452 "compare": false, 00:07:09.452 "compare_and_write": false, 00:07:09.452 "abort": true, 00:07:09.452 "seek_hole": false, 00:07:09.452 "seek_data": false, 00:07:09.452 "copy": true, 00:07:09.452 "nvme_iov_md": 
false 00:07:09.452 }, 00:07:09.452 "memory_domains": [ 00:07:09.452 { 00:07:09.452 "dma_device_id": "system", 00:07:09.452 "dma_device_type": 1 00:07:09.452 }, 00:07:09.452 { 00:07:09.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.452 "dma_device_type": 2 00:07:09.452 } 00:07:09.452 ], 00:07:09.452 "driver_specific": {} 00:07:09.452 } 00:07:09.452 ] 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:09.452 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.453 
03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.453 "name": "Existed_Raid", 00:07:09.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.453 "strip_size_kb": 64, 00:07:09.453 "state": "configuring", 00:07:09.453 "raid_level": "concat", 00:07:09.453 "superblock": false, 00:07:09.453 "num_base_bdevs": 2, 00:07:09.453 "num_base_bdevs_discovered": 1, 00:07:09.453 "num_base_bdevs_operational": 2, 00:07:09.453 "base_bdevs_list": [ 00:07:09.453 { 00:07:09.453 "name": "BaseBdev1", 00:07:09.453 "uuid": "7a110946-a823-4c4a-bb7b-6fd284774cea", 00:07:09.453 "is_configured": true, 00:07:09.453 "data_offset": 0, 00:07:09.453 "data_size": 65536 00:07:09.453 }, 00:07:09.453 { 00:07:09.453 "name": "BaseBdev2", 00:07:09.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.453 "is_configured": false, 00:07:09.453 "data_offset": 0, 00:07:09.453 "data_size": 0 00:07:09.453 } 00:07:09.453 ] 00:07:09.453 }' 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.453 03:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.720 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:09.720 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.720 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.720 [2024-11-20 03:13:59.340841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:09.720 [2024-11-20 03:13:59.340899] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:09.720 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.720 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.720 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.720 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.984 [2024-11-20 03:13:59.352866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.984 [2024-11-20 03:13:59.354733] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.984 [2024-11-20 03:13:59.354844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.984 "name": "Existed_Raid", 00:07:09.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.984 "strip_size_kb": 64, 00:07:09.984 "state": "configuring", 00:07:09.984 "raid_level": "concat", 00:07:09.984 "superblock": false, 00:07:09.984 "num_base_bdevs": 2, 00:07:09.984 "num_base_bdevs_discovered": 1, 00:07:09.984 "num_base_bdevs_operational": 2, 00:07:09.984 "base_bdevs_list": [ 00:07:09.984 { 00:07:09.984 "name": "BaseBdev1", 00:07:09.984 "uuid": "7a110946-a823-4c4a-bb7b-6fd284774cea", 00:07:09.984 "is_configured": true, 00:07:09.984 "data_offset": 0, 00:07:09.984 "data_size": 65536 00:07:09.984 }, 00:07:09.984 { 00:07:09.984 "name": "BaseBdev2", 00:07:09.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.984 "is_configured": false, 00:07:09.984 "data_offset": 0, 00:07:09.984 "data_size": 0 00:07:09.984 } 
00:07:09.984 ] 00:07:09.984 }' 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.984 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.243 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:10.243 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.243 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.503 [2024-11-20 03:13:59.881508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.503 [2024-11-20 03:13:59.881661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:10.503 [2024-11-20 03:13:59.881708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:10.503 [2024-11-20 03:13:59.882027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:10.503 [2024-11-20 03:13:59.882263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:10.503 [2024-11-20 03:13:59.882319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:10.503 [2024-11-20 03:13:59.882681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.503 BaseBdev2 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:10.503 03:13:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.503 [ 00:07:10.503 { 00:07:10.503 "name": "BaseBdev2", 00:07:10.503 "aliases": [ 00:07:10.503 "a0105edf-0d6b-4973-bd38-561bbfaed43a" 00:07:10.503 ], 00:07:10.503 "product_name": "Malloc disk", 00:07:10.503 "block_size": 512, 00:07:10.503 "num_blocks": 65536, 00:07:10.503 "uuid": "a0105edf-0d6b-4973-bd38-561bbfaed43a", 00:07:10.503 "assigned_rate_limits": { 00:07:10.503 "rw_ios_per_sec": 0, 00:07:10.503 "rw_mbytes_per_sec": 0, 00:07:10.503 "r_mbytes_per_sec": 0, 00:07:10.503 "w_mbytes_per_sec": 0 00:07:10.503 }, 00:07:10.503 "claimed": true, 00:07:10.503 "claim_type": "exclusive_write", 00:07:10.503 "zoned": false, 00:07:10.503 "supported_io_types": { 00:07:10.503 "read": true, 00:07:10.503 "write": true, 00:07:10.503 "unmap": true, 00:07:10.503 "flush": true, 00:07:10.503 "reset": true, 00:07:10.503 "nvme_admin": false, 00:07:10.503 "nvme_io": false, 00:07:10.503 "nvme_io_md": 
false, 00:07:10.503 "write_zeroes": true, 00:07:10.503 "zcopy": true, 00:07:10.503 "get_zone_info": false, 00:07:10.503 "zone_management": false, 00:07:10.503 "zone_append": false, 00:07:10.503 "compare": false, 00:07:10.503 "compare_and_write": false, 00:07:10.503 "abort": true, 00:07:10.503 "seek_hole": false, 00:07:10.503 "seek_data": false, 00:07:10.503 "copy": true, 00:07:10.503 "nvme_iov_md": false 00:07:10.503 }, 00:07:10.503 "memory_domains": [ 00:07:10.503 { 00:07:10.503 "dma_device_id": "system", 00:07:10.503 "dma_device_type": 1 00:07:10.503 }, 00:07:10.503 { 00:07:10.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.503 "dma_device_type": 2 00:07:10.503 } 00:07:10.503 ], 00:07:10.503 "driver_specific": {} 00:07:10.503 } 00:07:10.503 ] 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.503 "name": "Existed_Raid", 00:07:10.503 "uuid": "bfb3c2ef-87fa-48be-a9d7-eaa8419b5c8e", 00:07:10.503 "strip_size_kb": 64, 00:07:10.503 "state": "online", 00:07:10.503 "raid_level": "concat", 00:07:10.503 "superblock": false, 00:07:10.503 "num_base_bdevs": 2, 00:07:10.503 "num_base_bdevs_discovered": 2, 00:07:10.503 "num_base_bdevs_operational": 2, 00:07:10.503 "base_bdevs_list": [ 00:07:10.503 { 00:07:10.503 "name": "BaseBdev1", 00:07:10.503 "uuid": "7a110946-a823-4c4a-bb7b-6fd284774cea", 00:07:10.503 "is_configured": true, 00:07:10.503 "data_offset": 0, 00:07:10.503 "data_size": 65536 00:07:10.503 }, 00:07:10.503 { 00:07:10.503 "name": "BaseBdev2", 00:07:10.503 "uuid": "a0105edf-0d6b-4973-bd38-561bbfaed43a", 00:07:10.503 "is_configured": true, 00:07:10.503 "data_offset": 0, 00:07:10.503 "data_size": 65536 00:07:10.503 } 00:07:10.503 ] 00:07:10.503 }' 00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:10.503 03:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.762 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.021 [2024-11-20 03:14:00.396982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.021 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.021 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:11.021 "name": "Existed_Raid", 00:07:11.021 "aliases": [ 00:07:11.021 "bfb3c2ef-87fa-48be-a9d7-eaa8419b5c8e" 00:07:11.021 ], 00:07:11.021 "product_name": "Raid Volume", 00:07:11.021 "block_size": 512, 00:07:11.021 "num_blocks": 131072, 00:07:11.021 "uuid": "bfb3c2ef-87fa-48be-a9d7-eaa8419b5c8e", 00:07:11.021 "assigned_rate_limits": { 00:07:11.021 "rw_ios_per_sec": 0, 00:07:11.021 "rw_mbytes_per_sec": 0, 00:07:11.021 "r_mbytes_per_sec": 
0, 00:07:11.021 "w_mbytes_per_sec": 0 00:07:11.021 }, 00:07:11.021 "claimed": false, 00:07:11.021 "zoned": false, 00:07:11.021 "supported_io_types": { 00:07:11.021 "read": true, 00:07:11.021 "write": true, 00:07:11.021 "unmap": true, 00:07:11.021 "flush": true, 00:07:11.021 "reset": true, 00:07:11.021 "nvme_admin": false, 00:07:11.021 "nvme_io": false, 00:07:11.021 "nvme_io_md": false, 00:07:11.021 "write_zeroes": true, 00:07:11.021 "zcopy": false, 00:07:11.021 "get_zone_info": false, 00:07:11.021 "zone_management": false, 00:07:11.021 "zone_append": false, 00:07:11.021 "compare": false, 00:07:11.021 "compare_and_write": false, 00:07:11.021 "abort": false, 00:07:11.021 "seek_hole": false, 00:07:11.021 "seek_data": false, 00:07:11.021 "copy": false, 00:07:11.021 "nvme_iov_md": false 00:07:11.021 }, 00:07:11.021 "memory_domains": [ 00:07:11.021 { 00:07:11.021 "dma_device_id": "system", 00:07:11.021 "dma_device_type": 1 00:07:11.021 }, 00:07:11.021 { 00:07:11.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.021 "dma_device_type": 2 00:07:11.021 }, 00:07:11.021 { 00:07:11.021 "dma_device_id": "system", 00:07:11.021 "dma_device_type": 1 00:07:11.021 }, 00:07:11.021 { 00:07:11.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.021 "dma_device_type": 2 00:07:11.021 } 00:07:11.021 ], 00:07:11.021 "driver_specific": { 00:07:11.021 "raid": { 00:07:11.021 "uuid": "bfb3c2ef-87fa-48be-a9d7-eaa8419b5c8e", 00:07:11.021 "strip_size_kb": 64, 00:07:11.021 "state": "online", 00:07:11.021 "raid_level": "concat", 00:07:11.021 "superblock": false, 00:07:11.021 "num_base_bdevs": 2, 00:07:11.021 "num_base_bdevs_discovered": 2, 00:07:11.021 "num_base_bdevs_operational": 2, 00:07:11.021 "base_bdevs_list": [ 00:07:11.021 { 00:07:11.021 "name": "BaseBdev1", 00:07:11.021 "uuid": "7a110946-a823-4c4a-bb7b-6fd284774cea", 00:07:11.021 "is_configured": true, 00:07:11.021 "data_offset": 0, 00:07:11.021 "data_size": 65536 00:07:11.021 }, 00:07:11.021 { 00:07:11.021 "name": "BaseBdev2", 
00:07:11.021 "uuid": "a0105edf-0d6b-4973-bd38-561bbfaed43a", 00:07:11.021 "is_configured": true, 00:07:11.021 "data_offset": 0, 00:07:11.021 "data_size": 65536 00:07:11.021 } 00:07:11.021 ] 00:07:11.021 } 00:07:11.021 } 00:07:11.021 }' 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:11.022 BaseBdev2' 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.022 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.022 [2024-11-20 03:14:00.580423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:11.022 [2024-11-20 03:14:00.580509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:11.022 [2024-11-20 03:14:00.580584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.281 "name": "Existed_Raid", 00:07:11.281 "uuid": "bfb3c2ef-87fa-48be-a9d7-eaa8419b5c8e", 00:07:11.281 "strip_size_kb": 64, 00:07:11.281 
"state": "offline", 00:07:11.281 "raid_level": "concat", 00:07:11.281 "superblock": false, 00:07:11.281 "num_base_bdevs": 2, 00:07:11.281 "num_base_bdevs_discovered": 1, 00:07:11.281 "num_base_bdevs_operational": 1, 00:07:11.281 "base_bdevs_list": [ 00:07:11.281 { 00:07:11.281 "name": null, 00:07:11.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.281 "is_configured": false, 00:07:11.281 "data_offset": 0, 00:07:11.281 "data_size": 65536 00:07:11.281 }, 00:07:11.281 { 00:07:11.281 "name": "BaseBdev2", 00:07:11.281 "uuid": "a0105edf-0d6b-4973-bd38-561bbfaed43a", 00:07:11.281 "is_configured": true, 00:07:11.281 "data_offset": 0, 00:07:11.281 "data_size": 65536 00:07:11.281 } 00:07:11.281 ] 00:07:11.281 }' 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.281 03:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.539 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.539 [2024-11-20 03:14:01.152801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:11.539 [2024-11-20 03:14:01.152920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61578 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61578 ']' 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61578 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61578 00:07:11.798 killing process with pid 61578 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61578' 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61578 00:07:11.798 [2024-11-20 03:14:01.342981] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.798 03:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61578 00:07:11.798 [2024-11-20 03:14:01.359978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:13.176 00:07:13.176 real 0m5.021s 00:07:13.176 user 0m7.253s 00:07:13.176 sys 0m0.841s 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.176 ************************************ 00:07:13.176 END TEST raid_state_function_test 00:07:13.176 ************************************ 00:07:13.176 03:14:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:13.176 03:14:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:13.176 03:14:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.176 03:14:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.176 ************************************ 00:07:13.176 START TEST raid_state_function_test_sb 00:07:13.176 ************************************ 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61827 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61827' 00:07:13.176 Process raid pid: 61827 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61827 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61827 ']' 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.176 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.176 [2024-11-20 03:14:02.624474] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:13.176 [2024-11-20 03:14:02.624701] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.176 [2024-11-20 03:14:02.784492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.435 [2024-11-20 03:14:02.902403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.694 [2024-11-20 03:14:03.112244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.694 [2024-11-20 03:14:03.112373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.953 [2024-11-20 03:14:03.455177] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:13.953 [2024-11-20 03:14:03.455304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.953 [2024-11-20 03:14:03.455320] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.953 [2024-11-20 03:14:03.455331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:13.953 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.954 "name": "Existed_Raid", 00:07:13.954 "uuid": "e17dea33-f140-4f72-9666-61ea2a61ab8c", 00:07:13.954 "strip_size_kb": 64, 00:07:13.954 "state": "configuring", 00:07:13.954 "raid_level": "concat", 00:07:13.954 "superblock": true, 00:07:13.954 "num_base_bdevs": 2, 00:07:13.954 "num_base_bdevs_discovered": 0, 00:07:13.954 "num_base_bdevs_operational": 2, 00:07:13.954 "base_bdevs_list": [ 00:07:13.954 { 00:07:13.954 "name": "BaseBdev1", 00:07:13.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.954 "is_configured": false, 00:07:13.954 "data_offset": 0, 00:07:13.954 "data_size": 0 00:07:13.954 }, 00:07:13.954 { 00:07:13.954 "name": "BaseBdev2", 00:07:13.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.954 "is_configured": false, 00:07:13.954 "data_offset": 0, 00:07:13.954 "data_size": 0 00:07:13.954 } 00:07:13.954 ] 00:07:13.954 }' 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.954 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.523 [2024-11-20 03:14:03.934291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:14.523 [2024-11-20 03:14:03.934384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.523 [2024-11-20 03:14:03.946271] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.523 [2024-11-20 03:14:03.946360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.523 [2024-11-20 03:14:03.946389] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.523 [2024-11-20 03:14:03.946415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.523 [2024-11-20 03:14:03.990714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.523 BaseBdev1 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.523 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.523 [ 00:07:14.523 { 00:07:14.523 "name": "BaseBdev1", 00:07:14.523 "aliases": [ 00:07:14.523 "abacde83-6596-4d12-8be7-fcb79e543c87" 00:07:14.523 ], 00:07:14.523 "product_name": "Malloc disk", 00:07:14.523 "block_size": 512, 00:07:14.523 "num_blocks": 65536, 00:07:14.523 "uuid": "abacde83-6596-4d12-8be7-fcb79e543c87", 00:07:14.523 "assigned_rate_limits": { 00:07:14.523 "rw_ios_per_sec": 0, 00:07:14.523 "rw_mbytes_per_sec": 0, 00:07:14.523 "r_mbytes_per_sec": 0, 00:07:14.523 "w_mbytes_per_sec": 0 00:07:14.523 }, 00:07:14.523 "claimed": true, 
00:07:14.523 "claim_type": "exclusive_write", 00:07:14.523 "zoned": false, 00:07:14.523 "supported_io_types": { 00:07:14.523 "read": true, 00:07:14.523 "write": true, 00:07:14.523 "unmap": true, 00:07:14.523 "flush": true, 00:07:14.523 "reset": true, 00:07:14.523 "nvme_admin": false, 00:07:14.523 "nvme_io": false, 00:07:14.523 "nvme_io_md": false, 00:07:14.523 "write_zeroes": true, 00:07:14.523 "zcopy": true, 00:07:14.523 "get_zone_info": false, 00:07:14.523 "zone_management": false, 00:07:14.523 "zone_append": false, 00:07:14.523 "compare": false, 00:07:14.523 "compare_and_write": false, 00:07:14.523 "abort": true, 00:07:14.523 "seek_hole": false, 00:07:14.523 "seek_data": false, 00:07:14.523 "copy": true, 00:07:14.523 "nvme_iov_md": false 00:07:14.523 }, 00:07:14.523 "memory_domains": [ 00:07:14.523 { 00:07:14.523 "dma_device_id": "system", 00:07:14.523 "dma_device_type": 1 00:07:14.523 }, 00:07:14.523 { 00:07:14.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.523 "dma_device_type": 2 00:07:14.523 } 00:07:14.523 ], 00:07:14.523 "driver_specific": {} 00:07:14.523 } 00:07:14.523 ] 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.523 03:14:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.523 "name": "Existed_Raid", 00:07:14.523 "uuid": "e5948b42-dbab-44eb-8943-428e408b2a35", 00:07:14.523 "strip_size_kb": 64, 00:07:14.523 "state": "configuring", 00:07:14.523 "raid_level": "concat", 00:07:14.523 "superblock": true, 00:07:14.523 "num_base_bdevs": 2, 00:07:14.523 "num_base_bdevs_discovered": 1, 00:07:14.523 "num_base_bdevs_operational": 2, 00:07:14.523 "base_bdevs_list": [ 00:07:14.523 { 00:07:14.523 "name": "BaseBdev1", 00:07:14.523 "uuid": "abacde83-6596-4d12-8be7-fcb79e543c87", 00:07:14.523 "is_configured": true, 00:07:14.523 "data_offset": 2048, 00:07:14.523 "data_size": 63488 00:07:14.523 }, 00:07:14.523 { 00:07:14.523 "name": "BaseBdev2", 00:07:14.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.523 
"is_configured": false, 00:07:14.523 "data_offset": 0, 00:07:14.523 "data_size": 0 00:07:14.523 } 00:07:14.523 ] 00:07:14.523 }' 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.523 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.094 [2024-11-20 03:14:04.465953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.094 [2024-11-20 03:14:04.466075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.094 [2024-11-20 03:14:04.477977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.094 [2024-11-20 03:14:04.479955] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.094 [2024-11-20 03:14:04.480050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.094 03:14:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.094 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.094 03:14:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.094 "name": "Existed_Raid", 00:07:15.094 "uuid": "608f61a5-6560-4027-b70f-7df24888d3c1", 00:07:15.094 "strip_size_kb": 64, 00:07:15.094 "state": "configuring", 00:07:15.094 "raid_level": "concat", 00:07:15.094 "superblock": true, 00:07:15.094 "num_base_bdevs": 2, 00:07:15.094 "num_base_bdevs_discovered": 1, 00:07:15.094 "num_base_bdevs_operational": 2, 00:07:15.094 "base_bdevs_list": [ 00:07:15.094 { 00:07:15.094 "name": "BaseBdev1", 00:07:15.094 "uuid": "abacde83-6596-4d12-8be7-fcb79e543c87", 00:07:15.094 "is_configured": true, 00:07:15.094 "data_offset": 2048, 00:07:15.094 "data_size": 63488 00:07:15.094 }, 00:07:15.094 { 00:07:15.094 "name": "BaseBdev2", 00:07:15.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.094 "is_configured": false, 00:07:15.094 "data_offset": 0, 00:07:15.094 "data_size": 0 00:07:15.094 } 00:07:15.095 ] 00:07:15.095 }' 00:07:15.095 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.095 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.355 [2024-11-20 03:14:04.971504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.355 [2024-11-20 03:14:04.971878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.355 [2024-11-20 03:14:04.971900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.355 BaseBdev2 00:07:15.355 [2024-11-20 03:14:04.972300] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:15.355 [2024-11-20 03:14:04.972465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:15.355 [2024-11-20 03:14:04.972482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:15.355 [2024-11-20 03:14:04.972619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.355 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.355 
03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.615 [ 00:07:15.615 { 00:07:15.615 "name": "BaseBdev2", 00:07:15.615 "aliases": [ 00:07:15.615 "96c4917b-66af-4064-8236-88796a7fc7af" 00:07:15.615 ], 00:07:15.615 "product_name": "Malloc disk", 00:07:15.615 "block_size": 512, 00:07:15.615 "num_blocks": 65536, 00:07:15.615 "uuid": "96c4917b-66af-4064-8236-88796a7fc7af", 00:07:15.615 "assigned_rate_limits": { 00:07:15.615 "rw_ios_per_sec": 0, 00:07:15.615 "rw_mbytes_per_sec": 0, 00:07:15.615 "r_mbytes_per_sec": 0, 00:07:15.615 "w_mbytes_per_sec": 0 00:07:15.615 }, 00:07:15.615 "claimed": true, 00:07:15.615 "claim_type": "exclusive_write", 00:07:15.615 "zoned": false, 00:07:15.615 "supported_io_types": { 00:07:15.615 "read": true, 00:07:15.615 "write": true, 00:07:15.615 "unmap": true, 00:07:15.615 "flush": true, 00:07:15.615 "reset": true, 00:07:15.615 "nvme_admin": false, 00:07:15.615 "nvme_io": false, 00:07:15.615 "nvme_io_md": false, 00:07:15.615 "write_zeroes": true, 00:07:15.615 "zcopy": true, 00:07:15.615 "get_zone_info": false, 00:07:15.615 "zone_management": false, 00:07:15.615 "zone_append": false, 00:07:15.615 "compare": false, 00:07:15.615 "compare_and_write": false, 00:07:15.615 "abort": true, 00:07:15.615 "seek_hole": false, 00:07:15.615 "seek_data": false, 00:07:15.615 "copy": true, 00:07:15.615 "nvme_iov_md": false 00:07:15.615 }, 00:07:15.615 "memory_domains": [ 00:07:15.615 { 00:07:15.615 "dma_device_id": "system", 00:07:15.615 "dma_device_type": 1 00:07:15.615 }, 00:07:15.615 { 00:07:15.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.615 "dma_device_type": 2 00:07:15.615 } 00:07:15.615 ], 00:07:15.615 "driver_specific": {} 00:07:15.615 } 00:07:15.615 ] 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:15.615 03:14:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.615 03:14:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.615 "name": "Existed_Raid", 00:07:15.615 "uuid": "608f61a5-6560-4027-b70f-7df24888d3c1", 00:07:15.615 "strip_size_kb": 64, 00:07:15.615 "state": "online", 00:07:15.615 "raid_level": "concat", 00:07:15.615 "superblock": true, 00:07:15.615 "num_base_bdevs": 2, 00:07:15.615 "num_base_bdevs_discovered": 2, 00:07:15.615 "num_base_bdevs_operational": 2, 00:07:15.615 "base_bdevs_list": [ 00:07:15.615 { 00:07:15.615 "name": "BaseBdev1", 00:07:15.615 "uuid": "abacde83-6596-4d12-8be7-fcb79e543c87", 00:07:15.615 "is_configured": true, 00:07:15.615 "data_offset": 2048, 00:07:15.615 "data_size": 63488 00:07:15.615 }, 00:07:15.615 { 00:07:15.615 "name": "BaseBdev2", 00:07:15.615 "uuid": "96c4917b-66af-4064-8236-88796a7fc7af", 00:07:15.615 "is_configured": true, 00:07:15.615 "data_offset": 2048, 00:07:15.615 "data_size": 63488 00:07:15.615 } 00:07:15.615 ] 00:07:15.615 }' 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.615 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:15.874 [2024-11-20 03:14:05.431098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.874 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:15.874 "name": "Existed_Raid", 00:07:15.874 "aliases": [ 00:07:15.874 "608f61a5-6560-4027-b70f-7df24888d3c1" 00:07:15.874 ], 00:07:15.874 "product_name": "Raid Volume", 00:07:15.874 "block_size": 512, 00:07:15.874 "num_blocks": 126976, 00:07:15.874 "uuid": "608f61a5-6560-4027-b70f-7df24888d3c1", 00:07:15.874 "assigned_rate_limits": { 00:07:15.874 "rw_ios_per_sec": 0, 00:07:15.874 "rw_mbytes_per_sec": 0, 00:07:15.874 "r_mbytes_per_sec": 0, 00:07:15.874 "w_mbytes_per_sec": 0 00:07:15.874 }, 00:07:15.874 "claimed": false, 00:07:15.874 "zoned": false, 00:07:15.874 "supported_io_types": { 00:07:15.874 "read": true, 00:07:15.874 "write": true, 00:07:15.874 "unmap": true, 00:07:15.874 "flush": true, 00:07:15.874 "reset": true, 00:07:15.874 "nvme_admin": false, 00:07:15.874 "nvme_io": false, 00:07:15.874 "nvme_io_md": false, 00:07:15.874 "write_zeroes": true, 00:07:15.874 "zcopy": false, 00:07:15.874 "get_zone_info": false, 00:07:15.874 "zone_management": false, 00:07:15.874 "zone_append": false, 00:07:15.874 "compare": false, 00:07:15.874 "compare_and_write": false, 00:07:15.874 "abort": false, 00:07:15.874 "seek_hole": false, 00:07:15.874 "seek_data": false, 00:07:15.874 "copy": false, 00:07:15.874 "nvme_iov_md": false 00:07:15.874 }, 00:07:15.874 "memory_domains": [ 00:07:15.874 { 00:07:15.874 
"dma_device_id": "system", 00:07:15.875 "dma_device_type": 1 00:07:15.875 }, 00:07:15.875 { 00:07:15.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.875 "dma_device_type": 2 00:07:15.875 }, 00:07:15.875 { 00:07:15.875 "dma_device_id": "system", 00:07:15.875 "dma_device_type": 1 00:07:15.875 }, 00:07:15.875 { 00:07:15.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.875 "dma_device_type": 2 00:07:15.875 } 00:07:15.875 ], 00:07:15.875 "driver_specific": { 00:07:15.875 "raid": { 00:07:15.875 "uuid": "608f61a5-6560-4027-b70f-7df24888d3c1", 00:07:15.875 "strip_size_kb": 64, 00:07:15.875 "state": "online", 00:07:15.875 "raid_level": "concat", 00:07:15.875 "superblock": true, 00:07:15.875 "num_base_bdevs": 2, 00:07:15.875 "num_base_bdevs_discovered": 2, 00:07:15.875 "num_base_bdevs_operational": 2, 00:07:15.875 "base_bdevs_list": [ 00:07:15.875 { 00:07:15.875 "name": "BaseBdev1", 00:07:15.875 "uuid": "abacde83-6596-4d12-8be7-fcb79e543c87", 00:07:15.875 "is_configured": true, 00:07:15.875 "data_offset": 2048, 00:07:15.875 "data_size": 63488 00:07:15.875 }, 00:07:15.875 { 00:07:15.875 "name": "BaseBdev2", 00:07:15.875 "uuid": "96c4917b-66af-4064-8236-88796a7fc7af", 00:07:15.875 "is_configured": true, 00:07:15.875 "data_offset": 2048, 00:07:15.875 "data_size": 63488 00:07:15.875 } 00:07:15.875 ] 00:07:15.875 } 00:07:15.875 } 00:07:15.875 }' 00:07:15.875 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:16.134 BaseBdev2' 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.134 03:14:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.134 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.134 [2024-11-20 03:14:05.654488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.135 [2024-11-20 03:14:05.654523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.135 [2024-11-20 03:14:05.654574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.395 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.395 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.395 "name": "Existed_Raid", 00:07:16.395 "uuid": "608f61a5-6560-4027-b70f-7df24888d3c1", 00:07:16.395 "strip_size_kb": 64, 00:07:16.395 "state": "offline", 00:07:16.395 "raid_level": "concat", 00:07:16.395 "superblock": true, 00:07:16.395 "num_base_bdevs": 2, 00:07:16.395 "num_base_bdevs_discovered": 1, 00:07:16.395 "num_base_bdevs_operational": 1, 00:07:16.395 "base_bdevs_list": [ 00:07:16.395 { 00:07:16.395 "name": null, 00:07:16.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.395 "is_configured": false, 00:07:16.395 "data_offset": 0, 00:07:16.395 "data_size": 63488 00:07:16.395 }, 00:07:16.395 { 00:07:16.395 "name": "BaseBdev2", 00:07:16.395 "uuid": "96c4917b-66af-4064-8236-88796a7fc7af", 00:07:16.395 "is_configured": true, 00:07:16.395 "data_offset": 2048, 00:07:16.395 "data_size": 63488 00:07:16.395 } 00:07:16.395 ] 
00:07:16.395 }' 00:07:16.395 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.395 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.654 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.654 [2024-11-20 03:14:06.264772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:16.654 [2024-11-20 03:14:06.264832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.914 03:14:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61827 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61827 ']' 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61827 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61827 00:07:16.914 killing process with pid 61827 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61827' 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61827 00:07:16.914 [2024-11-20 03:14:06.457578] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.914 03:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61827 00:07:16.914 [2024-11-20 03:14:06.476009] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.299 03:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:18.299 00:07:18.299 real 0m5.064s 00:07:18.299 user 0m7.371s 00:07:18.299 sys 0m0.783s 00:07:18.299 03:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.299 03:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.299 ************************************ 00:07:18.299 END TEST raid_state_function_test_sb 00:07:18.299 ************************************ 00:07:18.299 03:14:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:18.299 03:14:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:18.299 03:14:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.299 03:14:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.299 ************************************ 00:07:18.299 START TEST raid_superblock_test 00:07:18.299 ************************************ 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62079 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62079 00:07:18.299 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62079 ']' 00:07:18.300 
03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.300 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.300 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.300 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.300 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.300 [2024-11-20 03:14:07.745280] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:18.300 [2024-11-20 03:14:07.745482] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62079 ] 00:07:18.300 [2024-11-20 03:14:07.902193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.560 [2024-11-20 03:14:08.019314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.820 [2024-11-20 03:14:08.228313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.820 [2024-11-20 03:14:08.228374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.080 malloc1 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.080 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.080 [2024-11-20 03:14:08.634207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:19.081 [2024-11-20 03:14:08.634337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.081 [2024-11-20 03:14:08.634383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:19.081 [2024-11-20 03:14:08.634414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:19.081 [2024-11-20 03:14:08.636793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.081 [2024-11-20 03:14:08.636884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:19.081 pt1 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.081 malloc2 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.081 [2024-11-20 03:14:08.693909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:19.081 [2024-11-20 03:14:08.693975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.081 [2024-11-20 03:14:08.693999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:19.081 [2024-11-20 03:14:08.694008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.081 [2024-11-20 03:14:08.696351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.081 [2024-11-20 03:14:08.696393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:19.081 pt2 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.081 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.081 [2024-11-20 03:14:08.705959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:19.081 [2024-11-20 03:14:08.707899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:19.081 [2024-11-20 03:14:08.708084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:19.081 [2024-11-20 03:14:08.708100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:19.081 [2024-11-20 03:14:08.708394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:19.081 [2024-11-20 03:14:08.708549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:19.081 [2024-11-20 03:14:08.708561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:19.081 [2024-11-20 03:14:08.708791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.342 03:14:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.342 "name": "raid_bdev1", 00:07:19.342 "uuid": "fdde5072-fb97-4cc3-9b68-708fec7f04cc", 00:07:19.342 "strip_size_kb": 64, 00:07:19.342 "state": "online", 00:07:19.342 "raid_level": "concat", 00:07:19.342 "superblock": true, 00:07:19.342 "num_base_bdevs": 2, 00:07:19.342 "num_base_bdevs_discovered": 2, 00:07:19.342 "num_base_bdevs_operational": 2, 00:07:19.342 "base_bdevs_list": [ 00:07:19.342 { 00:07:19.342 "name": "pt1", 00:07:19.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.342 "is_configured": true, 00:07:19.342 "data_offset": 2048, 00:07:19.342 "data_size": 63488 00:07:19.342 }, 00:07:19.342 { 00:07:19.342 "name": "pt2", 00:07:19.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.342 "is_configured": true, 00:07:19.342 "data_offset": 2048, 00:07:19.342 "data_size": 63488 00:07:19.342 } 00:07:19.342 ] 00:07:19.342 }' 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.342 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.603 
03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.603 [2024-11-20 03:14:09.141473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.603 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.603 "name": "raid_bdev1", 00:07:19.603 "aliases": [ 00:07:19.603 "fdde5072-fb97-4cc3-9b68-708fec7f04cc" 00:07:19.603 ], 00:07:19.603 "product_name": "Raid Volume", 00:07:19.603 "block_size": 512, 00:07:19.603 "num_blocks": 126976, 00:07:19.603 "uuid": "fdde5072-fb97-4cc3-9b68-708fec7f04cc", 00:07:19.603 "assigned_rate_limits": { 00:07:19.603 "rw_ios_per_sec": 0, 00:07:19.603 "rw_mbytes_per_sec": 0, 00:07:19.603 "r_mbytes_per_sec": 0, 00:07:19.603 "w_mbytes_per_sec": 0 00:07:19.603 }, 00:07:19.603 "claimed": false, 00:07:19.603 "zoned": false, 00:07:19.603 "supported_io_types": { 00:07:19.603 "read": true, 00:07:19.603 "write": true, 00:07:19.603 "unmap": true, 00:07:19.603 "flush": true, 00:07:19.603 "reset": true, 00:07:19.603 "nvme_admin": false, 00:07:19.603 "nvme_io": false, 00:07:19.603 "nvme_io_md": false, 00:07:19.603 "write_zeroes": true, 00:07:19.603 "zcopy": false, 00:07:19.603 "get_zone_info": false, 00:07:19.603 "zone_management": false, 00:07:19.603 "zone_append": false, 00:07:19.603 "compare": false, 00:07:19.603 "compare_and_write": false, 00:07:19.603 "abort": false, 00:07:19.603 "seek_hole": false, 00:07:19.603 
"seek_data": false, 00:07:19.603 "copy": false, 00:07:19.603 "nvme_iov_md": false 00:07:19.603 }, 00:07:19.603 "memory_domains": [ 00:07:19.603 { 00:07:19.603 "dma_device_id": "system", 00:07:19.603 "dma_device_type": 1 00:07:19.603 }, 00:07:19.603 { 00:07:19.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.603 "dma_device_type": 2 00:07:19.603 }, 00:07:19.603 { 00:07:19.603 "dma_device_id": "system", 00:07:19.603 "dma_device_type": 1 00:07:19.603 }, 00:07:19.603 { 00:07:19.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.603 "dma_device_type": 2 00:07:19.603 } 00:07:19.603 ], 00:07:19.603 "driver_specific": { 00:07:19.603 "raid": { 00:07:19.603 "uuid": "fdde5072-fb97-4cc3-9b68-708fec7f04cc", 00:07:19.603 "strip_size_kb": 64, 00:07:19.603 "state": "online", 00:07:19.603 "raid_level": "concat", 00:07:19.603 "superblock": true, 00:07:19.603 "num_base_bdevs": 2, 00:07:19.603 "num_base_bdevs_discovered": 2, 00:07:19.603 "num_base_bdevs_operational": 2, 00:07:19.603 "base_bdevs_list": [ 00:07:19.603 { 00:07:19.603 "name": "pt1", 00:07:19.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.603 "is_configured": true, 00:07:19.603 "data_offset": 2048, 00:07:19.603 "data_size": 63488 00:07:19.603 }, 00:07:19.603 { 00:07:19.603 "name": "pt2", 00:07:19.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.604 "is_configured": true, 00:07:19.604 "data_offset": 2048, 00:07:19.604 "data_size": 63488 00:07:19.604 } 00:07:19.604 ] 00:07:19.604 } 00:07:19.604 } 00:07:19.604 }' 00:07:19.604 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.604 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:19.604 pt2' 00:07:19.863 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.863 03:14:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.863 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.863 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.864 [2024-11-20 03:14:09.373119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fdde5072-fb97-4cc3-9b68-708fec7f04cc 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fdde5072-fb97-4cc3-9b68-708fec7f04cc ']' 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.864 [2024-11-20 03:14:09.420721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.864 [2024-11-20 03:14:09.420754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.864 [2024-11-20 03:14:09.420853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.864 [2024-11-20 03:14:09.420904] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.864 [2024-11-20 03:14:09.420919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq 
-r '.[]' 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.864 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.124 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.124 [2024-11-20 03:14:09.556520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:20.124 [2024-11-20 03:14:09.558563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:20.124 [2024-11-20 03:14:09.558653] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:20.124 [2024-11-20 03:14:09.558716] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:20.124 [2024-11-20 03:14:09.558734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.124 [2024-11-20 03:14:09.558745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:20.124 request: 00:07:20.124 { 00:07:20.124 "name": "raid_bdev1", 00:07:20.124 "raid_level": "concat", 00:07:20.124 "base_bdevs": [ 00:07:20.124 "malloc1", 00:07:20.124 "malloc2" 00:07:20.124 ], 00:07:20.124 "strip_size_kb": 64, 00:07:20.124 "superblock": false, 00:07:20.124 "method": "bdev_raid_create", 00:07:20.124 "req_id": 1 00:07:20.124 } 00:07:20.124 Got JSON-RPC error response 00:07:20.124 response: 00:07:20.124 { 00:07:20.124 "code": -17, 00:07:20.124 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:20.124 } 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.125 [2024-11-20 03:14:09.620365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:20.125 [2024-11-20 03:14:09.620501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.125 [2024-11-20 03:14:09.620545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:20.125 [2024-11-20 03:14:09.620595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.125 [2024-11-20 03:14:09.622975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.125 [2024-11-20 03:14:09.623057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:20.125 [2024-11-20 03:14:09.623206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:20.125 [2024-11-20 03:14:09.623330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:20.125 pt1 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.125 "name": "raid_bdev1", 00:07:20.125 "uuid": "fdde5072-fb97-4cc3-9b68-708fec7f04cc", 00:07:20.125 "strip_size_kb": 64, 00:07:20.125 "state": "configuring", 00:07:20.125 "raid_level": "concat", 00:07:20.125 "superblock": true, 00:07:20.125 "num_base_bdevs": 2, 00:07:20.125 "num_base_bdevs_discovered": 1, 00:07:20.125 "num_base_bdevs_operational": 2, 00:07:20.125 "base_bdevs_list": [ 00:07:20.125 { 00:07:20.125 
"name": "pt1", 00:07:20.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.125 "is_configured": true, 00:07:20.125 "data_offset": 2048, 00:07:20.125 "data_size": 63488 00:07:20.125 }, 00:07:20.125 { 00:07:20.125 "name": null, 00:07:20.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.125 "is_configured": false, 00:07:20.125 "data_offset": 2048, 00:07:20.125 "data_size": 63488 00:07:20.125 } 00:07:20.125 ] 00:07:20.125 }' 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.125 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.384 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:20.384 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:20.384 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:20.384 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:20.384 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.384 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.645 [2024-11-20 03:14:10.019760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:20.645 [2024-11-20 03:14:10.019850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.645 [2024-11-20 03:14:10.019874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:20.645 [2024-11-20 03:14:10.019887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.645 [2024-11-20 03:14:10.020391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.645 [2024-11-20 03:14:10.020414] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:20.645 [2024-11-20 03:14:10.020504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:20.645 [2024-11-20 03:14:10.020530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:20.645 [2024-11-20 03:14:10.020677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:20.645 [2024-11-20 03:14:10.020690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.645 [2024-11-20 03:14:10.020940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:20.645 [2024-11-20 03:14:10.021119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:20.645 [2024-11-20 03:14:10.021132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:20.645 [2024-11-20 03:14:10.021284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.645 pt2 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.645 
03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.645 "name": "raid_bdev1", 00:07:20.645 "uuid": "fdde5072-fb97-4cc3-9b68-708fec7f04cc", 00:07:20.645 "strip_size_kb": 64, 00:07:20.645 "state": "online", 00:07:20.645 "raid_level": "concat", 00:07:20.645 "superblock": true, 00:07:20.645 "num_base_bdevs": 2, 00:07:20.645 "num_base_bdevs_discovered": 2, 00:07:20.645 "num_base_bdevs_operational": 2, 00:07:20.645 "base_bdevs_list": [ 00:07:20.645 { 00:07:20.645 "name": "pt1", 00:07:20.645 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.645 "is_configured": true, 00:07:20.645 "data_offset": 2048, 00:07:20.645 "data_size": 63488 00:07:20.645 }, 00:07:20.645 { 00:07:20.645 "name": "pt2", 00:07:20.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.645 "is_configured": true, 00:07:20.645 "data_offset": 2048, 00:07:20.645 "data_size": 63488 
00:07:20.645 } 00:07:20.645 ] 00:07:20.645 }' 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.645 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.905 [2024-11-20 03:14:10.423274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.905 "name": "raid_bdev1", 00:07:20.905 "aliases": [ 00:07:20.905 "fdde5072-fb97-4cc3-9b68-708fec7f04cc" 00:07:20.905 ], 00:07:20.905 "product_name": "Raid Volume", 00:07:20.905 "block_size": 512, 00:07:20.905 "num_blocks": 126976, 00:07:20.905 "uuid": "fdde5072-fb97-4cc3-9b68-708fec7f04cc", 00:07:20.905 "assigned_rate_limits": { 00:07:20.905 
"rw_ios_per_sec": 0, 00:07:20.905 "rw_mbytes_per_sec": 0, 00:07:20.905 "r_mbytes_per_sec": 0, 00:07:20.905 "w_mbytes_per_sec": 0 00:07:20.905 }, 00:07:20.905 "claimed": false, 00:07:20.905 "zoned": false, 00:07:20.905 "supported_io_types": { 00:07:20.905 "read": true, 00:07:20.905 "write": true, 00:07:20.905 "unmap": true, 00:07:20.905 "flush": true, 00:07:20.905 "reset": true, 00:07:20.905 "nvme_admin": false, 00:07:20.905 "nvme_io": false, 00:07:20.905 "nvme_io_md": false, 00:07:20.905 "write_zeroes": true, 00:07:20.905 "zcopy": false, 00:07:20.905 "get_zone_info": false, 00:07:20.905 "zone_management": false, 00:07:20.905 "zone_append": false, 00:07:20.905 "compare": false, 00:07:20.905 "compare_and_write": false, 00:07:20.905 "abort": false, 00:07:20.905 "seek_hole": false, 00:07:20.905 "seek_data": false, 00:07:20.905 "copy": false, 00:07:20.905 "nvme_iov_md": false 00:07:20.905 }, 00:07:20.905 "memory_domains": [ 00:07:20.905 { 00:07:20.905 "dma_device_id": "system", 00:07:20.905 "dma_device_type": 1 00:07:20.905 }, 00:07:20.905 { 00:07:20.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.905 "dma_device_type": 2 00:07:20.905 }, 00:07:20.905 { 00:07:20.905 "dma_device_id": "system", 00:07:20.905 "dma_device_type": 1 00:07:20.905 }, 00:07:20.905 { 00:07:20.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.905 "dma_device_type": 2 00:07:20.905 } 00:07:20.905 ], 00:07:20.905 "driver_specific": { 00:07:20.905 "raid": { 00:07:20.905 "uuid": "fdde5072-fb97-4cc3-9b68-708fec7f04cc", 00:07:20.905 "strip_size_kb": 64, 00:07:20.905 "state": "online", 00:07:20.905 "raid_level": "concat", 00:07:20.905 "superblock": true, 00:07:20.905 "num_base_bdevs": 2, 00:07:20.905 "num_base_bdevs_discovered": 2, 00:07:20.905 "num_base_bdevs_operational": 2, 00:07:20.905 "base_bdevs_list": [ 00:07:20.905 { 00:07:20.905 "name": "pt1", 00:07:20.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.905 "is_configured": true, 00:07:20.905 "data_offset": 2048, 00:07:20.905 
"data_size": 63488 00:07:20.905 }, 00:07:20.905 { 00:07:20.905 "name": "pt2", 00:07:20.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.905 "is_configured": true, 00:07:20.905 "data_offset": 2048, 00:07:20.905 "data_size": 63488 00:07:20.905 } 00:07:20.905 ] 00:07:20.905 } 00:07:20.905 } 00:07:20.905 }' 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:20.905 pt2' 00:07:20.905 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.166 [2024-11-20 03:14:10.651005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fdde5072-fb97-4cc3-9b68-708fec7f04cc '!=' fdde5072-fb97-4cc3-9b68-708fec7f04cc ']' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62079 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62079 ']' 
00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62079 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62079 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.166 killing process with pid 62079 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62079' 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62079 00:07:21.166 [2024-11-20 03:14:10.718716] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.166 [2024-11-20 03:14:10.718823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.166 [2024-11-20 03:14:10.718878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.166 [2024-11-20 03:14:10.718892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:21.166 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62079 00:07:21.427 [2024-11-20 03:14:10.927840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.809 03:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:22.809 00:07:22.809 real 0m4.390s 00:07:22.809 user 0m6.161s 00:07:22.809 sys 0m0.682s 00:07:22.809 03:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.809 03:14:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 ************************************ 00:07:22.809 END TEST raid_superblock_test 00:07:22.809 ************************************ 00:07:22.809 03:14:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:22.809 03:14:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:22.809 03:14:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.809 03:14:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 ************************************ 00:07:22.809 START TEST raid_read_error_test 00:07:22.809 ************************************ 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.809 
03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.g8R7O620v2 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62291 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62291 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62291 ']' 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.809 03:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 [2024-11-20 03:14:12.209041] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:22.809 [2024-11-20 03:14:12.209258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62291 ] 00:07:22.809 [2024-11-20 03:14:12.384654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.069 [2024-11-20 03:14:12.500289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.329 [2024-11-20 03:14:12.705187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.329 [2024-11-20 03:14:12.705343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.589 BaseBdev1_malloc 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.589 true 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.589 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.589 [2024-11-20 03:14:13.100087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:23.589 [2024-11-20 03:14:13.100148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.589 [2024-11-20 03:14:13.100170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:23.590 [2024-11-20 03:14:13.100180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.590 [2024-11-20 03:14:13.102552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.590 [2024-11-20 03:14:13.102607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:23.590 BaseBdev1 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:23.590 03:14:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.590 BaseBdev2_malloc 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.590 true 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.590 [2024-11-20 03:14:13.165723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:23.590 [2024-11-20 03:14:13.165780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.590 [2024-11-20 03:14:13.165814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:23.590 [2024-11-20 03:14:13.165824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.590 [2024-11-20 03:14:13.167955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.590 [2024-11-20 03:14:13.167993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:23.590 BaseBdev2 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.590 [2024-11-20 03:14:13.177777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.590 [2024-11-20 03:14:13.179630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.590 [2024-11-20 03:14:13.179821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.590 [2024-11-20 03:14:13.179836] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.590 [2024-11-20 03:14:13.180073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:23.590 [2024-11-20 03:14:13.180246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.590 [2024-11-20 03:14:13.180258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:23.590 [2024-11-20 03:14:13.180425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.590 "name": "raid_bdev1", 00:07:23.590 "uuid": "21a1a7f7-2131-40c0-82a7-8788c38963d6", 00:07:23.590 "strip_size_kb": 64, 00:07:23.590 "state": "online", 00:07:23.590 "raid_level": "concat", 00:07:23.590 "superblock": true, 00:07:23.590 "num_base_bdevs": 2, 00:07:23.590 "num_base_bdevs_discovered": 2, 00:07:23.590 "num_base_bdevs_operational": 2, 00:07:23.590 "base_bdevs_list": [ 00:07:23.590 { 00:07:23.590 "name": "BaseBdev1", 00:07:23.590 "uuid": "67623b5b-ccb8-533f-b90c-9882a249d29f", 00:07:23.590 "is_configured": true, 00:07:23.590 "data_offset": 2048, 00:07:23.590 "data_size": 63488 
00:07:23.590 }, 00:07:23.590 { 00:07:23.590 "name": "BaseBdev2", 00:07:23.590 "uuid": "44248412-42e5-5356-b1a7-000305625ca9", 00:07:23.590 "is_configured": true, 00:07:23.590 "data_offset": 2048, 00:07:23.590 "data_size": 63488 00:07:23.590 } 00:07:23.590 ] 00:07:23.590 }' 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.590 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.160 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:24.160 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:24.160 [2024-11-20 03:14:13.678092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.101 "name": "raid_bdev1", 00:07:25.101 "uuid": "21a1a7f7-2131-40c0-82a7-8788c38963d6", 00:07:25.101 "strip_size_kb": 64, 00:07:25.101 "state": "online", 00:07:25.101 "raid_level": "concat", 00:07:25.101 "superblock": true, 00:07:25.101 "num_base_bdevs": 2, 00:07:25.101 "num_base_bdevs_discovered": 2, 00:07:25.101 "num_base_bdevs_operational": 2, 00:07:25.101 "base_bdevs_list": [ 00:07:25.101 { 00:07:25.101 "name": "BaseBdev1", 00:07:25.101 "uuid": "67623b5b-ccb8-533f-b90c-9882a249d29f", 00:07:25.101 "is_configured": true, 00:07:25.101 "data_offset": 2048, 00:07:25.101 "data_size": 63488 
00:07:25.101 }, 00:07:25.101 { 00:07:25.101 "name": "BaseBdev2", 00:07:25.101 "uuid": "44248412-42e5-5356-b1a7-000305625ca9", 00:07:25.101 "is_configured": true, 00:07:25.101 "data_offset": 2048, 00:07:25.101 "data_size": 63488 00:07:25.101 } 00:07:25.101 ] 00:07:25.101 }' 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.101 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.670 03:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:25.670 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.670 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.671 [2024-11-20 03:14:15.109070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.671 [2024-11-20 03:14:15.109111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.671 [2024-11-20 03:14:15.112151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.671 [2024-11-20 03:14:15.112199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.671 [2024-11-20 03:14:15.112235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.671 [2024-11-20 03:14:15.112251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:25.671 { 00:07:25.671 "results": [ 00:07:25.671 { 00:07:25.671 "job": "raid_bdev1", 00:07:25.671 "core_mask": "0x1", 00:07:25.671 "workload": "randrw", 00:07:25.671 "percentage": 50, 00:07:25.671 "status": "finished", 00:07:25.671 "queue_depth": 1, 00:07:25.671 "io_size": 131072, 00:07:25.671 "runtime": 1.431555, 00:07:25.671 "iops": 15733.93966700546, 00:07:25.671 "mibps": 1966.7424583756824, 00:07:25.671 
"io_failed": 1, 00:07:25.671 "io_timeout": 0, 00:07:25.671 "avg_latency_us": 88.24912290565068, 00:07:25.671 "min_latency_us": 26.494323144104804, 00:07:25.671 "max_latency_us": 1681.3275109170306 00:07:25.671 } 00:07:25.671 ], 00:07:25.671 "core_count": 1 00:07:25.671 } 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62291 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62291 ']' 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62291 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62291 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62291' 00:07:25.671 killing process with pid 62291 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62291 00:07:25.671 [2024-11-20 03:14:15.160828] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.671 03:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62291 00:07:25.671 [2024-11-20 03:14:15.300863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.g8R7O620v2 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:27.052 ************************************ 00:07:27.052 END TEST raid_read_error_test 00:07:27.052 ************************************ 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:27.052 00:07:27.052 real 0m4.357s 00:07:27.052 user 0m5.225s 00:07:27.052 sys 0m0.548s 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.052 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.052 03:14:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:27.052 03:14:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:27.052 03:14:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.052 03:14:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.052 ************************************ 00:07:27.052 START TEST raid_write_error_test 00:07:27.052 ************************************ 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:27.052 03:14:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:27.052 03:14:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.seekIRozAy 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62432 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62432 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62432 ']' 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.052 03:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.052 [2024-11-20 03:14:16.632361] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:27.052 [2024-11-20 03:14:16.632577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62432 ] 00:07:27.312 [2024-11-20 03:14:16.806516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.312 [2024-11-20 03:14:16.918940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.572 [2024-11-20 03:14:17.116659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.572 [2024-11-20 03:14:17.116773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.862 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.862 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:27.862 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:27.862 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:27.862 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.862 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 BaseBdev1_malloc 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 true 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 [2024-11-20 03:14:17.534532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:28.125 [2024-11-20 03:14:17.534594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.125 [2024-11-20 03:14:17.534626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:28.125 [2024-11-20 03:14:17.534654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.125 [2024-11-20 03:14:17.536778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.125 [2024-11-20 03:14:17.536818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:28.125 BaseBdev1 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 BaseBdev2_malloc 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:28.125 03:14:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 true 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 [2024-11-20 03:14:17.601116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:28.125 [2024-11-20 03:14:17.601170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.125 [2024-11-20 03:14:17.601186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:28.125 [2024-11-20 03:14:17.601196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.125 [2024-11-20 03:14:17.603271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.125 [2024-11-20 03:14:17.603312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:28.125 BaseBdev2 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 [2024-11-20 03:14:17.613153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:28.125 [2024-11-20 03:14:17.614953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.125 [2024-11-20 03:14:17.615141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.125 [2024-11-20 03:14:17.615156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.125 [2024-11-20 03:14:17.615393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:28.125 [2024-11-20 03:14:17.615568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.125 [2024-11-20 03:14:17.615580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:28.125 [2024-11-20 03:14:17.615746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.125 03:14:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.125 "name": "raid_bdev1", 00:07:28.125 "uuid": "bd115090-85e2-480e-a54e-dba77dd1ddd7", 00:07:28.125 "strip_size_kb": 64, 00:07:28.125 "state": "online", 00:07:28.125 "raid_level": "concat", 00:07:28.125 "superblock": true, 00:07:28.125 "num_base_bdevs": 2, 00:07:28.125 "num_base_bdevs_discovered": 2, 00:07:28.125 "num_base_bdevs_operational": 2, 00:07:28.125 "base_bdevs_list": [ 00:07:28.125 { 00:07:28.125 "name": "BaseBdev1", 00:07:28.125 "uuid": "f0027aef-02a1-52e1-bac2-1c33c8bfb12d", 00:07:28.125 "is_configured": true, 00:07:28.125 "data_offset": 2048, 00:07:28.125 "data_size": 63488 00:07:28.125 }, 00:07:28.125 { 00:07:28.125 "name": "BaseBdev2", 00:07:28.125 "uuid": "e0a08ad5-102d-56d6-a461-9cc025f5e3ce", 00:07:28.125 "is_configured": true, 00:07:28.125 "data_offset": 2048, 00:07:28.125 "data_size": 63488 00:07:28.125 } 00:07:28.125 ] 00:07:28.125 }' 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.125 03:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.695 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:28.695 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:28.695 [2024-11-20 03:14:18.125521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.634 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.634 "name": "raid_bdev1", 00:07:29.634 "uuid": "bd115090-85e2-480e-a54e-dba77dd1ddd7", 00:07:29.634 "strip_size_kb": 64, 00:07:29.634 "state": "online", 00:07:29.634 "raid_level": "concat", 00:07:29.634 "superblock": true, 00:07:29.634 "num_base_bdevs": 2, 00:07:29.634 "num_base_bdevs_discovered": 2, 00:07:29.634 "num_base_bdevs_operational": 2, 00:07:29.634 "base_bdevs_list": [ 00:07:29.634 { 00:07:29.634 "name": "BaseBdev1", 00:07:29.634 "uuid": "f0027aef-02a1-52e1-bac2-1c33c8bfb12d", 00:07:29.634 "is_configured": true, 00:07:29.634 "data_offset": 2048, 00:07:29.634 "data_size": 63488 00:07:29.634 }, 00:07:29.635 { 00:07:29.635 "name": "BaseBdev2", 00:07:29.635 "uuid": "e0a08ad5-102d-56d6-a461-9cc025f5e3ce", 00:07:29.635 "is_configured": true, 00:07:29.635 "data_offset": 2048, 00:07:29.635 "data_size": 63488 00:07:29.635 } 00:07:29.635 ] 00:07:29.635 }' 00:07:29.635 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.635 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.894 [2024-11-20 03:14:19.473383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.894 [2024-11-20 03:14:19.473514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.894 [2024-11-20 03:14:19.476508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.894 [2024-11-20 03:14:19.476618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.894 [2024-11-20 03:14:19.476678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.894 [2024-11-20 03:14:19.476741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:29.894 { 00:07:29.894 "results": [ 00:07:29.894 { 00:07:29.894 "job": "raid_bdev1", 00:07:29.894 "core_mask": "0x1", 00:07:29.894 "workload": "randrw", 00:07:29.894 "percentage": 50, 00:07:29.894 "status": "finished", 00:07:29.894 "queue_depth": 1, 00:07:29.894 "io_size": 131072, 00:07:29.894 "runtime": 1.348716, 00:07:29.894 "iops": 15898.825253055498, 00:07:29.894 "mibps": 1987.3531566319373, 00:07:29.894 "io_failed": 1, 00:07:29.894 "io_timeout": 0, 00:07:29.894 "avg_latency_us": 87.32597206576038, 00:07:29.894 "min_latency_us": 26.382532751091702, 00:07:29.894 "max_latency_us": 1459.5353711790392 00:07:29.894 } 00:07:29.894 ], 00:07:29.894 "core_count": 1 00:07:29.894 } 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62432 00:07:29.894 03:14:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62432 ']' 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62432 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62432 00:07:29.894 killing process with pid 62432 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62432' 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62432 00:07:29.894 [2024-11-20 03:14:19.520810] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:29.894 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62432 00:07:30.153 [2024-11-20 03:14:19.655697] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.536 03:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.seekIRozAy 00:07:31.536 03:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:31.536 03:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:31.536 03:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:31.536 03:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:31.536 03:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.536 03:14:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:31.536 03:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:31.536 ************************************ 00:07:31.536 END TEST raid_write_error_test 00:07:31.537 ************************************ 00:07:31.537 00:07:31.537 real 0m4.297s 00:07:31.537 user 0m5.141s 00:07:31.537 sys 0m0.508s 00:07:31.537 03:14:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.537 03:14:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.537 03:14:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:31.537 03:14:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:31.537 03:14:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:31.537 03:14:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.537 03:14:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.537 ************************************ 00:07:31.537 START TEST raid_state_function_test 00:07:31.537 ************************************ 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62570 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62570' 00:07:31.537 Process raid pid: 62570 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62570 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62570 ']' 00:07:31.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.537 03:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.537 [2024-11-20 03:14:20.988699] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:31.537 [2024-11-20 03:14:20.988915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.537 [2024-11-20 03:14:21.165358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.797 [2024-11-20 03:14:21.280448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.056 [2024-11-20 03:14:21.489557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.056 [2024-11-20 03:14:21.489709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.316 [2024-11-20 03:14:21.832888] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.316 [2024-11-20 03:14:21.833038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.316 [2024-11-20 03:14:21.833053] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.316 [2024-11-20 03:14:21.833063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.316 03:14:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.316 "name": "Existed_Raid", 00:07:32.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.316 "strip_size_kb": 0, 00:07:32.316 "state": "configuring", 00:07:32.316 
"raid_level": "raid1", 00:07:32.316 "superblock": false, 00:07:32.316 "num_base_bdevs": 2, 00:07:32.316 "num_base_bdevs_discovered": 0, 00:07:32.316 "num_base_bdevs_operational": 2, 00:07:32.316 "base_bdevs_list": [ 00:07:32.316 { 00:07:32.316 "name": "BaseBdev1", 00:07:32.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.316 "is_configured": false, 00:07:32.316 "data_offset": 0, 00:07:32.316 "data_size": 0 00:07:32.316 }, 00:07:32.316 { 00:07:32.316 "name": "BaseBdev2", 00:07:32.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.316 "is_configured": false, 00:07:32.316 "data_offset": 0, 00:07:32.316 "data_size": 0 00:07:32.316 } 00:07:32.316 ] 00:07:32.316 }' 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.316 03:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.885 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.885 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.885 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.886 [2024-11-20 03:14:22.272080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.886 [2024-11-20 03:14:22.272189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:32.886 [2024-11-20 03:14:22.284044] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.886 [2024-11-20 03:14:22.284087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.886 [2024-11-20 03:14:22.284097] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.886 [2024-11-20 03:14:22.284107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.886 [2024-11-20 03:14:22.330552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.886 BaseBdev1 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.886 [ 00:07:32.886 { 00:07:32.886 "name": "BaseBdev1", 00:07:32.886 "aliases": [ 00:07:32.886 "576cf270-9879-4ddf-806e-2a5fb7d7fffc" 00:07:32.886 ], 00:07:32.886 "product_name": "Malloc disk", 00:07:32.886 "block_size": 512, 00:07:32.886 "num_blocks": 65536, 00:07:32.886 "uuid": "576cf270-9879-4ddf-806e-2a5fb7d7fffc", 00:07:32.886 "assigned_rate_limits": { 00:07:32.886 "rw_ios_per_sec": 0, 00:07:32.886 "rw_mbytes_per_sec": 0, 00:07:32.886 "r_mbytes_per_sec": 0, 00:07:32.886 "w_mbytes_per_sec": 0 00:07:32.886 }, 00:07:32.886 "claimed": true, 00:07:32.886 "claim_type": "exclusive_write", 00:07:32.886 "zoned": false, 00:07:32.886 "supported_io_types": { 00:07:32.886 "read": true, 00:07:32.886 "write": true, 00:07:32.886 "unmap": true, 00:07:32.886 "flush": true, 00:07:32.886 "reset": true, 00:07:32.886 "nvme_admin": false, 00:07:32.886 "nvme_io": false, 00:07:32.886 "nvme_io_md": false, 00:07:32.886 "write_zeroes": true, 00:07:32.886 "zcopy": true, 00:07:32.886 "get_zone_info": false, 00:07:32.886 "zone_management": false, 00:07:32.886 "zone_append": false, 00:07:32.886 "compare": false, 00:07:32.886 "compare_and_write": false, 00:07:32.886 "abort": true, 00:07:32.886 "seek_hole": false, 00:07:32.886 "seek_data": false, 00:07:32.886 "copy": true, 00:07:32.886 "nvme_iov_md": 
false 00:07:32.886 }, 00:07:32.886 "memory_domains": [ 00:07:32.886 { 00:07:32.886 "dma_device_id": "system", 00:07:32.886 "dma_device_type": 1 00:07:32.886 }, 00:07:32.886 { 00:07:32.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.886 "dma_device_type": 2 00:07:32.886 } 00:07:32.886 ], 00:07:32.886 "driver_specific": {} 00:07:32.886 } 00:07:32.886 ] 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.886 
03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.886 "name": "Existed_Raid", 00:07:32.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.886 "strip_size_kb": 0, 00:07:32.886 "state": "configuring", 00:07:32.886 "raid_level": "raid1", 00:07:32.886 "superblock": false, 00:07:32.886 "num_base_bdevs": 2, 00:07:32.886 "num_base_bdevs_discovered": 1, 00:07:32.886 "num_base_bdevs_operational": 2, 00:07:32.886 "base_bdevs_list": [ 00:07:32.886 { 00:07:32.886 "name": "BaseBdev1", 00:07:32.886 "uuid": "576cf270-9879-4ddf-806e-2a5fb7d7fffc", 00:07:32.886 "is_configured": true, 00:07:32.886 "data_offset": 0, 00:07:32.886 "data_size": 65536 00:07:32.886 }, 00:07:32.886 { 00:07:32.886 "name": "BaseBdev2", 00:07:32.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.886 "is_configured": false, 00:07:32.886 "data_offset": 0, 00:07:32.886 "data_size": 0 00:07:32.886 } 00:07:32.886 ] 00:07:32.886 }' 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.886 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.456 [2024-11-20 03:14:22.785800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.456 [2024-11-20 03:14:22.785854] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.456 [2024-11-20 03:14:22.797817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.456 [2024-11-20 03:14:22.799687] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.456 [2024-11-20 03:14:22.799778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.456 "name": "Existed_Raid", 00:07:33.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.456 "strip_size_kb": 0, 00:07:33.456 "state": "configuring", 00:07:33.456 "raid_level": "raid1", 00:07:33.456 "superblock": false, 00:07:33.456 "num_base_bdevs": 2, 00:07:33.456 "num_base_bdevs_discovered": 1, 00:07:33.456 "num_base_bdevs_operational": 2, 00:07:33.456 "base_bdevs_list": [ 00:07:33.456 { 00:07:33.456 "name": "BaseBdev1", 00:07:33.456 "uuid": "576cf270-9879-4ddf-806e-2a5fb7d7fffc", 00:07:33.456 "is_configured": true, 00:07:33.456 "data_offset": 0, 00:07:33.456 "data_size": 65536 00:07:33.456 }, 00:07:33.456 { 00:07:33.456 "name": "BaseBdev2", 00:07:33.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.456 "is_configured": false, 00:07:33.456 "data_offset": 0, 00:07:33.456 "data_size": 0 00:07:33.456 } 00:07:33.456 ] 
00:07:33.456 }' 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.456 03:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.716 [2024-11-20 03:14:23.192371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.716 [2024-11-20 03:14:23.192489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.716 [2024-11-20 03:14:23.192502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:33.716 [2024-11-20 03:14:23.192790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:33.716 [2024-11-20 03:14:23.192950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:33.716 [2024-11-20 03:14:23.192965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:33.716 [2024-11-20 03:14:23.193215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.716 BaseBdev2 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.716 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.716 [ 00:07:33.716 { 00:07:33.716 "name": "BaseBdev2", 00:07:33.716 "aliases": [ 00:07:33.716 "ba760c2c-5bcc-4204-ae32-51ee2d909871" 00:07:33.716 ], 00:07:33.716 "product_name": "Malloc disk", 00:07:33.716 "block_size": 512, 00:07:33.716 "num_blocks": 65536, 00:07:33.716 "uuid": "ba760c2c-5bcc-4204-ae32-51ee2d909871", 00:07:33.716 "assigned_rate_limits": { 00:07:33.716 "rw_ios_per_sec": 0, 00:07:33.716 "rw_mbytes_per_sec": 0, 00:07:33.717 "r_mbytes_per_sec": 0, 00:07:33.717 "w_mbytes_per_sec": 0 00:07:33.717 }, 00:07:33.717 "claimed": true, 00:07:33.717 "claim_type": "exclusive_write", 00:07:33.717 "zoned": false, 00:07:33.717 "supported_io_types": { 00:07:33.717 "read": true, 00:07:33.717 "write": true, 00:07:33.717 "unmap": true, 00:07:33.717 "flush": true, 00:07:33.717 "reset": true, 00:07:33.717 "nvme_admin": false, 00:07:33.717 "nvme_io": false, 00:07:33.717 "nvme_io_md": false, 00:07:33.717 "write_zeroes": 
true, 00:07:33.717 "zcopy": true, 00:07:33.717 "get_zone_info": false, 00:07:33.717 "zone_management": false, 00:07:33.717 "zone_append": false, 00:07:33.717 "compare": false, 00:07:33.717 "compare_and_write": false, 00:07:33.717 "abort": true, 00:07:33.717 "seek_hole": false, 00:07:33.717 "seek_data": false, 00:07:33.717 "copy": true, 00:07:33.717 "nvme_iov_md": false 00:07:33.717 }, 00:07:33.717 "memory_domains": [ 00:07:33.717 { 00:07:33.717 "dma_device_id": "system", 00:07:33.717 "dma_device_type": 1 00:07:33.717 }, 00:07:33.717 { 00:07:33.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.717 "dma_device_type": 2 00:07:33.717 } 00:07:33.717 ], 00:07:33.717 "driver_specific": {} 00:07:33.717 } 00:07:33.717 ] 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.717 03:14:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.717 "name": "Existed_Raid", 00:07:33.717 "uuid": "24a1355c-917c-4b75-b706-00f1dd48b345", 00:07:33.717 "strip_size_kb": 0, 00:07:33.717 "state": "online", 00:07:33.717 "raid_level": "raid1", 00:07:33.717 "superblock": false, 00:07:33.717 "num_base_bdevs": 2, 00:07:33.717 "num_base_bdevs_discovered": 2, 00:07:33.717 "num_base_bdevs_operational": 2, 00:07:33.717 "base_bdevs_list": [ 00:07:33.717 { 00:07:33.717 "name": "BaseBdev1", 00:07:33.717 "uuid": "576cf270-9879-4ddf-806e-2a5fb7d7fffc", 00:07:33.717 "is_configured": true, 00:07:33.717 "data_offset": 0, 00:07:33.717 "data_size": 65536 00:07:33.717 }, 00:07:33.717 { 00:07:33.717 "name": "BaseBdev2", 00:07:33.717 "uuid": "ba760c2c-5bcc-4204-ae32-51ee2d909871", 00:07:33.717 "is_configured": true, 00:07:33.717 "data_offset": 0, 00:07:33.717 "data_size": 65536 00:07:33.717 } 00:07:33.717 ] 00:07:33.717 }' 00:07:33.717 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.717 03:14:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.287 [2024-11-20 03:14:23.679894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.287 "name": "Existed_Raid", 00:07:34.287 "aliases": [ 00:07:34.287 "24a1355c-917c-4b75-b706-00f1dd48b345" 00:07:34.287 ], 00:07:34.287 "product_name": "Raid Volume", 00:07:34.287 "block_size": 512, 00:07:34.287 "num_blocks": 65536, 00:07:34.287 "uuid": "24a1355c-917c-4b75-b706-00f1dd48b345", 00:07:34.287 "assigned_rate_limits": { 00:07:34.287 "rw_ios_per_sec": 0, 00:07:34.287 "rw_mbytes_per_sec": 0, 00:07:34.287 "r_mbytes_per_sec": 0, 00:07:34.287 
"w_mbytes_per_sec": 0 00:07:34.287 }, 00:07:34.287 "claimed": false, 00:07:34.287 "zoned": false, 00:07:34.287 "supported_io_types": { 00:07:34.287 "read": true, 00:07:34.287 "write": true, 00:07:34.287 "unmap": false, 00:07:34.287 "flush": false, 00:07:34.287 "reset": true, 00:07:34.287 "nvme_admin": false, 00:07:34.287 "nvme_io": false, 00:07:34.287 "nvme_io_md": false, 00:07:34.287 "write_zeroes": true, 00:07:34.287 "zcopy": false, 00:07:34.287 "get_zone_info": false, 00:07:34.287 "zone_management": false, 00:07:34.287 "zone_append": false, 00:07:34.287 "compare": false, 00:07:34.287 "compare_and_write": false, 00:07:34.287 "abort": false, 00:07:34.287 "seek_hole": false, 00:07:34.287 "seek_data": false, 00:07:34.287 "copy": false, 00:07:34.287 "nvme_iov_md": false 00:07:34.287 }, 00:07:34.287 "memory_domains": [ 00:07:34.287 { 00:07:34.287 "dma_device_id": "system", 00:07:34.287 "dma_device_type": 1 00:07:34.287 }, 00:07:34.287 { 00:07:34.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.287 "dma_device_type": 2 00:07:34.287 }, 00:07:34.287 { 00:07:34.287 "dma_device_id": "system", 00:07:34.287 "dma_device_type": 1 00:07:34.287 }, 00:07:34.287 { 00:07:34.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.287 "dma_device_type": 2 00:07:34.287 } 00:07:34.287 ], 00:07:34.287 "driver_specific": { 00:07:34.287 "raid": { 00:07:34.287 "uuid": "24a1355c-917c-4b75-b706-00f1dd48b345", 00:07:34.287 "strip_size_kb": 0, 00:07:34.287 "state": "online", 00:07:34.287 "raid_level": "raid1", 00:07:34.287 "superblock": false, 00:07:34.287 "num_base_bdevs": 2, 00:07:34.287 "num_base_bdevs_discovered": 2, 00:07:34.287 "num_base_bdevs_operational": 2, 00:07:34.287 "base_bdevs_list": [ 00:07:34.287 { 00:07:34.287 "name": "BaseBdev1", 00:07:34.287 "uuid": "576cf270-9879-4ddf-806e-2a5fb7d7fffc", 00:07:34.287 "is_configured": true, 00:07:34.287 "data_offset": 0, 00:07:34.287 "data_size": 65536 00:07:34.287 }, 00:07:34.287 { 00:07:34.287 "name": "BaseBdev2", 00:07:34.287 "uuid": 
"ba760c2c-5bcc-4204-ae32-51ee2d909871", 00:07:34.287 "is_configured": true, 00:07:34.287 "data_offset": 0, 00:07:34.287 "data_size": 65536 00:07:34.287 } 00:07:34.287 ] 00:07:34.287 } 00:07:34.287 } 00:07:34.287 }' 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.287 BaseBdev2' 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.287 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.288 03:14:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.288 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.288 [2024-11-20 03:14:23.911347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.548 "name": "Existed_Raid", 00:07:34.548 "uuid": "24a1355c-917c-4b75-b706-00f1dd48b345", 00:07:34.548 "strip_size_kb": 0, 00:07:34.548 "state": "online", 00:07:34.548 "raid_level": "raid1", 00:07:34.548 "superblock": false, 00:07:34.548 "num_base_bdevs": 2, 00:07:34.548 "num_base_bdevs_discovered": 1, 00:07:34.548 "num_base_bdevs_operational": 1, 00:07:34.548 "base_bdevs_list": [ 00:07:34.548 { 
00:07:34.548 "name": null, 00:07:34.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.548 "is_configured": false, 00:07:34.548 "data_offset": 0, 00:07:34.548 "data_size": 65536 00:07:34.548 }, 00:07:34.548 { 00:07:34.548 "name": "BaseBdev2", 00:07:34.548 "uuid": "ba760c2c-5bcc-4204-ae32-51ee2d909871", 00:07:34.548 "is_configured": true, 00:07:34.548 "data_offset": 0, 00:07:34.548 "data_size": 65536 00:07:34.548 } 00:07:34.548 ] 00:07:34.548 }' 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.548 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:35.118 [2024-11-20 03:14:24.546420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.118 [2024-11-20 03:14:24.546517] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.118 [2024-11-20 03:14:24.642247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.118 [2024-11-20 03:14:24.642384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.118 [2024-11-20 03:14:24.642426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62570 00:07:35.118 03:14:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62570 ']' 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62570 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62570 00:07:35.118 killing process with pid 62570 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62570' 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62570 00:07:35.118 [2024-11-20 03:14:24.737140] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.118 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62570 00:07:35.378 [2024-11-20 03:14:24.754258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:36.317 00:07:36.317 real 0m4.951s 00:07:36.317 user 0m7.119s 00:07:36.317 sys 0m0.831s 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.317 ************************************ 00:07:36.317 END TEST raid_state_function_test 00:07:36.317 ************************************ 00:07:36.317 03:14:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:36.317 03:14:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:36.317 03:14:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.317 03:14:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.317 ************************************ 00:07:36.317 START TEST raid_state_function_test_sb 00:07:36.317 ************************************ 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:36.317 Process raid pid: 62823 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62823 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62823' 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62823 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62823 ']' 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.317 03:14:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.317 03:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.577 [2024-11-20 03:14:26.013980] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:36.577 [2024-11-20 03:14:26.014195] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.577 [2024-11-20 03:14:26.190963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.836 [2024-11-20 03:14:26.303920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.096 [2024-11-20 03:14:26.513747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.096 [2024-11-20 03:14:26.513856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.362 [2024-11-20 03:14:26.855475] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:37.362 [2024-11-20 03:14:26.855534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.362 [2024-11-20 03:14:26.855545] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.362 [2024-11-20 03:14:26.855555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.362 "name": "Existed_Raid", 00:07:37.362 "uuid": "dfb6947b-d765-4246-a33b-6a917385dde8", 00:07:37.362 "strip_size_kb": 0, 00:07:37.362 "state": "configuring", 00:07:37.362 "raid_level": "raid1", 00:07:37.362 "superblock": true, 00:07:37.362 "num_base_bdevs": 2, 00:07:37.362 "num_base_bdevs_discovered": 0, 00:07:37.362 "num_base_bdevs_operational": 2, 00:07:37.362 "base_bdevs_list": [ 00:07:37.362 { 00:07:37.362 "name": "BaseBdev1", 00:07:37.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.362 "is_configured": false, 00:07:37.362 "data_offset": 0, 00:07:37.362 "data_size": 0 00:07:37.362 }, 00:07:37.362 { 00:07:37.362 "name": "BaseBdev2", 00:07:37.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.362 "is_configured": false, 00:07:37.362 "data_offset": 0, 00:07:37.362 "data_size": 0 00:07:37.362 } 00:07:37.362 ] 00:07:37.362 }' 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.362 03:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.942 [2024-11-20 03:14:27.334680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:37.942 [2024-11-20 03:14:27.334718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.942 [2024-11-20 03:14:27.346633] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:37.942 [2024-11-20 03:14:27.346691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.942 [2024-11-20 03:14:27.346700] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.942 [2024-11-20 03:14:27.346712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.942 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.942 [2024-11-20 03:14:27.393482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.943 BaseBdev1 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.943 [ 00:07:37.943 { 00:07:37.943 "name": "BaseBdev1", 00:07:37.943 "aliases": [ 00:07:37.943 "73522d2f-4ccb-4955-b009-7aada620dfb8" 00:07:37.943 ], 00:07:37.943 "product_name": "Malloc disk", 00:07:37.943 "block_size": 512, 00:07:37.943 "num_blocks": 65536, 00:07:37.943 "uuid": "73522d2f-4ccb-4955-b009-7aada620dfb8", 00:07:37.943 "assigned_rate_limits": { 00:07:37.943 "rw_ios_per_sec": 0, 00:07:37.943 "rw_mbytes_per_sec": 0, 00:07:37.943 "r_mbytes_per_sec": 0, 00:07:37.943 "w_mbytes_per_sec": 0 00:07:37.943 }, 00:07:37.943 "claimed": true, 
00:07:37.943 "claim_type": "exclusive_write", 00:07:37.943 "zoned": false, 00:07:37.943 "supported_io_types": { 00:07:37.943 "read": true, 00:07:37.943 "write": true, 00:07:37.943 "unmap": true, 00:07:37.943 "flush": true, 00:07:37.943 "reset": true, 00:07:37.943 "nvme_admin": false, 00:07:37.943 "nvme_io": false, 00:07:37.943 "nvme_io_md": false, 00:07:37.943 "write_zeroes": true, 00:07:37.943 "zcopy": true, 00:07:37.943 "get_zone_info": false, 00:07:37.943 "zone_management": false, 00:07:37.943 "zone_append": false, 00:07:37.943 "compare": false, 00:07:37.943 "compare_and_write": false, 00:07:37.943 "abort": true, 00:07:37.943 "seek_hole": false, 00:07:37.943 "seek_data": false, 00:07:37.943 "copy": true, 00:07:37.943 "nvme_iov_md": false 00:07:37.943 }, 00:07:37.943 "memory_domains": [ 00:07:37.943 { 00:07:37.943 "dma_device_id": "system", 00:07:37.943 "dma_device_type": 1 00:07:37.943 }, 00:07:37.943 { 00:07:37.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.943 "dma_device_type": 2 00:07:37.943 } 00:07:37.943 ], 00:07:37.943 "driver_specific": {} 00:07:37.943 } 00:07:37.943 ] 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.943 "name": "Existed_Raid", 00:07:37.943 "uuid": "d30ca50d-6a92-4f70-9e61-dbb078024ef2", 00:07:37.943 "strip_size_kb": 0, 00:07:37.943 "state": "configuring", 00:07:37.943 "raid_level": "raid1", 00:07:37.943 "superblock": true, 00:07:37.943 "num_base_bdevs": 2, 00:07:37.943 "num_base_bdevs_discovered": 1, 00:07:37.943 "num_base_bdevs_operational": 2, 00:07:37.943 "base_bdevs_list": [ 00:07:37.943 { 00:07:37.943 "name": "BaseBdev1", 00:07:37.943 "uuid": "73522d2f-4ccb-4955-b009-7aada620dfb8", 00:07:37.943 "is_configured": true, 00:07:37.943 "data_offset": 2048, 00:07:37.943 "data_size": 63488 00:07:37.943 }, 00:07:37.943 { 00:07:37.943 "name": "BaseBdev2", 00:07:37.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.943 "is_configured": false, 00:07:37.943 
"data_offset": 0, 00:07:37.943 "data_size": 0 00:07:37.943 } 00:07:37.943 ] 00:07:37.943 }' 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.943 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.513 [2024-11-20 03:14:27.840773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.513 [2024-11-20 03:14:27.840825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.513 [2024-11-20 03:14:27.852792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.513 [2024-11-20 03:14:27.854701] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.513 [2024-11-20 03:14:27.854779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.513 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.514 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.514 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.514 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.514 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.514 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.514 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.514 03:14:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.514 "name": "Existed_Raid", 00:07:38.514 "uuid": "e06a8773-5afd-4494-bda2-0b604db5e3d1", 00:07:38.514 "strip_size_kb": 0, 00:07:38.514 "state": "configuring", 00:07:38.514 "raid_level": "raid1", 00:07:38.514 "superblock": true, 00:07:38.514 "num_base_bdevs": 2, 00:07:38.514 "num_base_bdevs_discovered": 1, 00:07:38.514 "num_base_bdevs_operational": 2, 00:07:38.514 "base_bdevs_list": [ 00:07:38.514 { 00:07:38.514 "name": "BaseBdev1", 00:07:38.514 "uuid": "73522d2f-4ccb-4955-b009-7aada620dfb8", 00:07:38.514 "is_configured": true, 00:07:38.514 "data_offset": 2048, 00:07:38.514 "data_size": 63488 00:07:38.514 }, 00:07:38.514 { 00:07:38.514 "name": "BaseBdev2", 00:07:38.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.514 "is_configured": false, 00:07:38.514 "data_offset": 0, 00:07:38.514 "data_size": 0 00:07:38.514 } 00:07:38.514 ] 00:07:38.514 }' 00:07:38.514 03:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.514 03:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.773 [2024-11-20 03:14:28.310061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:38.773 [2024-11-20 03:14:28.310345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:38.773 [2024-11-20 03:14:28.310361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:38.773 [2024-11-20 03:14:28.310641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:38.773 
BaseBdev2 00:07:38.773 [2024-11-20 03:14:28.310822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:38.773 [2024-11-20 03:14:28.310842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:38.773 [2024-11-20 03:14:28.310993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.773 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.774 [ 00:07:38.774 { 00:07:38.774 "name": "BaseBdev2", 00:07:38.774 "aliases": [ 00:07:38.774 "f8a47afb-ae9f-4f77-897d-21c8e535eae0" 00:07:38.774 ], 00:07:38.774 "product_name": "Malloc disk", 00:07:38.774 "block_size": 512, 00:07:38.774 "num_blocks": 65536, 00:07:38.774 "uuid": "f8a47afb-ae9f-4f77-897d-21c8e535eae0", 00:07:38.774 "assigned_rate_limits": { 00:07:38.774 "rw_ios_per_sec": 0, 00:07:38.774 "rw_mbytes_per_sec": 0, 00:07:38.774 "r_mbytes_per_sec": 0, 00:07:38.774 "w_mbytes_per_sec": 0 00:07:38.774 }, 00:07:38.774 "claimed": true, 00:07:38.774 "claim_type": "exclusive_write", 00:07:38.774 "zoned": false, 00:07:38.774 "supported_io_types": { 00:07:38.774 "read": true, 00:07:38.774 "write": true, 00:07:38.774 "unmap": true, 00:07:38.774 "flush": true, 00:07:38.774 "reset": true, 00:07:38.774 "nvme_admin": false, 00:07:38.774 "nvme_io": false, 00:07:38.774 "nvme_io_md": false, 00:07:38.774 "write_zeroes": true, 00:07:38.774 "zcopy": true, 00:07:38.774 "get_zone_info": false, 00:07:38.774 "zone_management": false, 00:07:38.774 "zone_append": false, 00:07:38.774 "compare": false, 00:07:38.774 "compare_and_write": false, 00:07:38.774 "abort": true, 00:07:38.774 "seek_hole": false, 00:07:38.774 "seek_data": false, 00:07:38.774 "copy": true, 00:07:38.774 "nvme_iov_md": false 00:07:38.774 }, 00:07:38.774 "memory_domains": [ 00:07:38.774 { 00:07:38.774 "dma_device_id": "system", 00:07:38.774 "dma_device_type": 1 00:07:38.774 }, 00:07:38.774 { 00:07:38.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.774 "dma_device_type": 2 00:07:38.774 } 00:07:38.774 ], 00:07:38.774 "driver_specific": {} 00:07:38.774 } 00:07:38.774 ] 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:38.774 "name": "Existed_Raid", 00:07:38.774 "uuid": "e06a8773-5afd-4494-bda2-0b604db5e3d1", 00:07:38.774 "strip_size_kb": 0, 00:07:38.774 "state": "online", 00:07:38.774 "raid_level": "raid1", 00:07:38.774 "superblock": true, 00:07:38.774 "num_base_bdevs": 2, 00:07:38.774 "num_base_bdevs_discovered": 2, 00:07:38.774 "num_base_bdevs_operational": 2, 00:07:38.774 "base_bdevs_list": [ 00:07:38.774 { 00:07:38.774 "name": "BaseBdev1", 00:07:38.774 "uuid": "73522d2f-4ccb-4955-b009-7aada620dfb8", 00:07:38.774 "is_configured": true, 00:07:38.774 "data_offset": 2048, 00:07:38.774 "data_size": 63488 00:07:38.774 }, 00:07:38.774 { 00:07:38.774 "name": "BaseBdev2", 00:07:38.774 "uuid": "f8a47afb-ae9f-4f77-897d-21c8e535eae0", 00:07:38.774 "is_configured": true, 00:07:38.774 "data_offset": 2048, 00:07:38.774 "data_size": 63488 00:07:38.774 } 00:07:38.774 ] 00:07:38.774 }' 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.774 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.343 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:39.343 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:39.343 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:39.344 03:14:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.344 [2024-11-20 03:14:28.753652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:39.344 "name": "Existed_Raid", 00:07:39.344 "aliases": [ 00:07:39.344 "e06a8773-5afd-4494-bda2-0b604db5e3d1" 00:07:39.344 ], 00:07:39.344 "product_name": "Raid Volume", 00:07:39.344 "block_size": 512, 00:07:39.344 "num_blocks": 63488, 00:07:39.344 "uuid": "e06a8773-5afd-4494-bda2-0b604db5e3d1", 00:07:39.344 "assigned_rate_limits": { 00:07:39.344 "rw_ios_per_sec": 0, 00:07:39.344 "rw_mbytes_per_sec": 0, 00:07:39.344 "r_mbytes_per_sec": 0, 00:07:39.344 "w_mbytes_per_sec": 0 00:07:39.344 }, 00:07:39.344 "claimed": false, 00:07:39.344 "zoned": false, 00:07:39.344 "supported_io_types": { 00:07:39.344 "read": true, 00:07:39.344 "write": true, 00:07:39.344 "unmap": false, 00:07:39.344 "flush": false, 00:07:39.344 "reset": true, 00:07:39.344 "nvme_admin": false, 00:07:39.344 "nvme_io": false, 00:07:39.344 "nvme_io_md": false, 00:07:39.344 "write_zeroes": true, 00:07:39.344 "zcopy": false, 00:07:39.344 "get_zone_info": false, 00:07:39.344 "zone_management": false, 00:07:39.344 "zone_append": false, 00:07:39.344 "compare": false, 00:07:39.344 "compare_and_write": false, 00:07:39.344 "abort": false, 00:07:39.344 "seek_hole": false, 00:07:39.344 "seek_data": false, 00:07:39.344 "copy": false, 00:07:39.344 "nvme_iov_md": false 00:07:39.344 }, 00:07:39.344 "memory_domains": [ 00:07:39.344 { 00:07:39.344 "dma_device_id": "system", 00:07:39.344 
"dma_device_type": 1 00:07:39.344 }, 00:07:39.344 { 00:07:39.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.344 "dma_device_type": 2 00:07:39.344 }, 00:07:39.344 { 00:07:39.344 "dma_device_id": "system", 00:07:39.344 "dma_device_type": 1 00:07:39.344 }, 00:07:39.344 { 00:07:39.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.344 "dma_device_type": 2 00:07:39.344 } 00:07:39.344 ], 00:07:39.344 "driver_specific": { 00:07:39.344 "raid": { 00:07:39.344 "uuid": "e06a8773-5afd-4494-bda2-0b604db5e3d1", 00:07:39.344 "strip_size_kb": 0, 00:07:39.344 "state": "online", 00:07:39.344 "raid_level": "raid1", 00:07:39.344 "superblock": true, 00:07:39.344 "num_base_bdevs": 2, 00:07:39.344 "num_base_bdevs_discovered": 2, 00:07:39.344 "num_base_bdevs_operational": 2, 00:07:39.344 "base_bdevs_list": [ 00:07:39.344 { 00:07:39.344 "name": "BaseBdev1", 00:07:39.344 "uuid": "73522d2f-4ccb-4955-b009-7aada620dfb8", 00:07:39.344 "is_configured": true, 00:07:39.344 "data_offset": 2048, 00:07:39.344 "data_size": 63488 00:07:39.344 }, 00:07:39.344 { 00:07:39.344 "name": "BaseBdev2", 00:07:39.344 "uuid": "f8a47afb-ae9f-4f77-897d-21c8e535eae0", 00:07:39.344 "is_configured": true, 00:07:39.344 "data_offset": 2048, 00:07:39.344 "data_size": 63488 00:07:39.344 } 00:07:39.344 ] 00:07:39.344 } 00:07:39.344 } 00:07:39.344 }' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:39.344 BaseBdev2' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:39.344 03:14:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.344 03:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 [2024-11-20 03:14:28.977022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.605 "name": "Existed_Raid", 00:07:39.605 "uuid": "e06a8773-5afd-4494-bda2-0b604db5e3d1", 00:07:39.605 "strip_size_kb": 0, 00:07:39.605 "state": "online", 00:07:39.605 "raid_level": "raid1", 00:07:39.605 "superblock": true, 00:07:39.605 "num_base_bdevs": 2, 00:07:39.605 "num_base_bdevs_discovered": 1, 00:07:39.605 "num_base_bdevs_operational": 1, 00:07:39.605 "base_bdevs_list": [ 00:07:39.605 { 00:07:39.605 "name": null, 00:07:39.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.605 "is_configured": false, 00:07:39.605 "data_offset": 0, 00:07:39.605 "data_size": 63488 00:07:39.605 }, 00:07:39.605 { 00:07:39.605 "name": "BaseBdev2", 00:07:39.605 "uuid": "f8a47afb-ae9f-4f77-897d-21c8e535eae0", 00:07:39.605 "is_configured": true, 00:07:39.605 "data_offset": 2048, 00:07:39.605 "data_size": 63488 00:07:39.605 } 00:07:39.605 ] 00:07:39.605 }' 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.605 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.174 [2024-11-20 03:14:29.601400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.174 [2024-11-20 03:14:29.601511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.174 [2024-11-20 03:14:29.698780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.174 [2024-11-20 03:14:29.698837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.174 [2024-11-20 03:14:29.698850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62823 00:07:40.174 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62823 ']' 00:07:40.175 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62823 00:07:40.175 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:40.175 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.175 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62823 00:07:40.175 killing process with pid 62823 00:07:40.175 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:07:40.175 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.175 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62823' 00:07:40.175 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62823 00:07:40.175 [2024-11-20 03:14:29.783463] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.175 03:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62823 00:07:40.175 [2024-11-20 03:14:29.800119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.558 03:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:41.558 00:07:41.558 real 0m4.973s 00:07:41.558 user 0m7.217s 00:07:41.558 sys 0m0.798s 00:07:41.558 03:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.558 03:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.558 ************************************ 00:07:41.558 END TEST raid_state_function_test_sb 00:07:41.558 ************************************ 00:07:41.558 03:14:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:41.558 03:14:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:41.558 03:14:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.558 03:14:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.558 ************************************ 00:07:41.558 START TEST raid_superblock_test 00:07:41.558 ************************************ 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63075 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63075 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63075 ']' 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.558 03:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.558 [2024-11-20 03:14:31.041830] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:41.558 [2024-11-20 03:14:31.041955] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63075 ] 00:07:41.818 [2024-11-20 03:14:31.211903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.818 [2024-11-20 03:14:31.323031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.078 [2024-11-20 03:14:31.519058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.078 [2024-11-20 03:14:31.519120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:42.338 03:14:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.338 malloc1 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.338 [2024-11-20 03:14:31.943964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:42.338 [2024-11-20 03:14:31.944055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.338 [2024-11-20 03:14:31.944079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:42.338 [2024-11-20 03:14:31.944089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.338 
[2024-11-20 03:14:31.946201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.338 [2024-11-20 03:14:31.946237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:42.338 pt1 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.338 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.599 malloc2 00:07:42.599 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.599 03:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:42.599 03:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.599 03:14:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.599 [2024-11-20 03:14:31.998543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:42.599 [2024-11-20 03:14:31.998617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.599 [2024-11-20 03:14:31.998647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:42.599 [2024-11-20 03:14:31.998656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.599 [2024-11-20 03:14:32.000713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.599 [2024-11-20 03:14:32.000747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:42.599 pt2 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.599 [2024-11-20 03:14:32.010573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:42.599 [2024-11-20 03:14:32.012362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:42.599 [2024-11-20 03:14:32.012521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:42.599 [2024-11-20 03:14:32.012544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:42.599 [2024-11-20 
03:14:32.012785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.599 [2024-11-20 03:14:32.012952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:42.599 [2024-11-20 03:14:32.012974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:42.599 [2024-11-20 03:14:32.013130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.599 03:14:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.599 "name": "raid_bdev1", 00:07:42.599 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:42.599 "strip_size_kb": 0, 00:07:42.599 "state": "online", 00:07:42.599 "raid_level": "raid1", 00:07:42.599 "superblock": true, 00:07:42.599 "num_base_bdevs": 2, 00:07:42.599 "num_base_bdevs_discovered": 2, 00:07:42.599 "num_base_bdevs_operational": 2, 00:07:42.599 "base_bdevs_list": [ 00:07:42.599 { 00:07:42.599 "name": "pt1", 00:07:42.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:42.599 "is_configured": true, 00:07:42.599 "data_offset": 2048, 00:07:42.599 "data_size": 63488 00:07:42.599 }, 00:07:42.599 { 00:07:42.599 "name": "pt2", 00:07:42.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:42.599 "is_configured": true, 00:07:42.599 "data_offset": 2048, 00:07:42.599 "data_size": 63488 00:07:42.599 } 00:07:42.599 ] 00:07:42.599 }' 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.599 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.859 
03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.859 [2024-11-20 03:14:32.462061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.859 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.120 "name": "raid_bdev1", 00:07:43.120 "aliases": [ 00:07:43.120 "230cdfab-6b31-4623-bc98-dab1af1f4789" 00:07:43.120 ], 00:07:43.120 "product_name": "Raid Volume", 00:07:43.120 "block_size": 512, 00:07:43.120 "num_blocks": 63488, 00:07:43.120 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:43.120 "assigned_rate_limits": { 00:07:43.120 "rw_ios_per_sec": 0, 00:07:43.120 "rw_mbytes_per_sec": 0, 00:07:43.120 "r_mbytes_per_sec": 0, 00:07:43.120 "w_mbytes_per_sec": 0 00:07:43.120 }, 00:07:43.120 "claimed": false, 00:07:43.120 "zoned": false, 00:07:43.120 "supported_io_types": { 00:07:43.120 "read": true, 00:07:43.120 "write": true, 00:07:43.120 "unmap": false, 00:07:43.120 "flush": false, 00:07:43.120 "reset": true, 00:07:43.120 "nvme_admin": false, 00:07:43.120 "nvme_io": false, 00:07:43.120 "nvme_io_md": false, 00:07:43.120 "write_zeroes": true, 00:07:43.120 "zcopy": false, 00:07:43.120 "get_zone_info": false, 00:07:43.120 "zone_management": false, 00:07:43.120 "zone_append": false, 00:07:43.120 "compare": false, 00:07:43.120 "compare_and_write": false, 00:07:43.120 "abort": false, 00:07:43.120 "seek_hole": false, 
00:07:43.120 "seek_data": false, 00:07:43.120 "copy": false, 00:07:43.120 "nvme_iov_md": false 00:07:43.120 }, 00:07:43.120 "memory_domains": [ 00:07:43.120 { 00:07:43.120 "dma_device_id": "system", 00:07:43.120 "dma_device_type": 1 00:07:43.120 }, 00:07:43.120 { 00:07:43.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.120 "dma_device_type": 2 00:07:43.120 }, 00:07:43.120 { 00:07:43.120 "dma_device_id": "system", 00:07:43.120 "dma_device_type": 1 00:07:43.120 }, 00:07:43.120 { 00:07:43.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.120 "dma_device_type": 2 00:07:43.120 } 00:07:43.120 ], 00:07:43.120 "driver_specific": { 00:07:43.120 "raid": { 00:07:43.120 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:43.120 "strip_size_kb": 0, 00:07:43.120 "state": "online", 00:07:43.120 "raid_level": "raid1", 00:07:43.120 "superblock": true, 00:07:43.120 "num_base_bdevs": 2, 00:07:43.120 "num_base_bdevs_discovered": 2, 00:07:43.120 "num_base_bdevs_operational": 2, 00:07:43.120 "base_bdevs_list": [ 00:07:43.120 { 00:07:43.120 "name": "pt1", 00:07:43.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.120 "is_configured": true, 00:07:43.120 "data_offset": 2048, 00:07:43.120 "data_size": 63488 00:07:43.120 }, 00:07:43.120 { 00:07:43.120 "name": "pt2", 00:07:43.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.120 "is_configured": true, 00:07:43.120 "data_offset": 2048, 00:07:43.120 "data_size": 63488 00:07:43.120 } 00:07:43.120 ] 00:07:43.120 } 00:07:43.120 } 00:07:43.120 }' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:43.120 pt2' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.120 03:14:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.120 [2024-11-20 03:14:32.677700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=230cdfab-6b31-4623-bc98-dab1af1f4789 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 230cdfab-6b31-4623-bc98-dab1af1f4789 ']' 00:07:43.120 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:43.121 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.121 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.121 [2024-11-20 03:14:32.721308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.121 [2024-11-20 03:14:32.721335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.121 [2024-11-20 03:14:32.721433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.121 [2024-11-20 03:14:32.721494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.121 [2024-11-20 03:14:32.721508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:43.121 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.121 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:43.121 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:43.121 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.121 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.121 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.382 [2024-11-20 03:14:32.849116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:43.382 [2024-11-20 03:14:32.851033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:43.382 [2024-11-20 03:14:32.851105] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:43.382 [2024-11-20 03:14:32.851162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:43.382 [2024-11-20 03:14:32.851178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.382 [2024-11-20 03:14:32.851191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:43.382 request: 00:07:43.382 { 00:07:43.382 "name": "raid_bdev1", 00:07:43.382 "raid_level": "raid1", 00:07:43.382 "base_bdevs": [ 00:07:43.382 "malloc1", 00:07:43.382 "malloc2" 00:07:43.382 ], 00:07:43.382 "superblock": false, 00:07:43.382 "method": "bdev_raid_create", 00:07:43.382 "req_id": 1 00:07:43.382 } 00:07:43.382 Got JSON-RPC error response 00:07:43.382 response: 00:07:43.382 { 00:07:43.382 "code": -17, 00:07:43.382 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:43.382 } 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.382 [2024-11-20 03:14:32.936972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:43.382 [2024-11-20 03:14:32.937042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.382 [2024-11-20 03:14:32.937060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:43.382 [2024-11-20 03:14:32.937071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.382 [2024-11-20 03:14:32.939342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.382 [2024-11-20 03:14:32.939386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:43.382 [2024-11-20 03:14:32.939476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:43.382 [2024-11-20 03:14:32.939554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:43.382 pt1 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.382 03:14:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.382 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.382 "name": "raid_bdev1", 00:07:43.383 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:43.383 "strip_size_kb": 0, 00:07:43.383 "state": "configuring", 00:07:43.383 "raid_level": "raid1", 00:07:43.383 "superblock": true, 00:07:43.383 "num_base_bdevs": 2, 00:07:43.383 "num_base_bdevs_discovered": 1, 00:07:43.383 "num_base_bdevs_operational": 2, 00:07:43.383 "base_bdevs_list": [ 00:07:43.383 { 00:07:43.383 "name": "pt1", 00:07:43.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.383 
"is_configured": true, 00:07:43.383 "data_offset": 2048, 00:07:43.383 "data_size": 63488 00:07:43.383 }, 00:07:43.383 { 00:07:43.383 "name": null, 00:07:43.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.383 "is_configured": false, 00:07:43.383 "data_offset": 2048, 00:07:43.383 "data_size": 63488 00:07:43.383 } 00:07:43.383 ] 00:07:43.383 }' 00:07:43.383 03:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.383 03:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.955 [2024-11-20 03:14:33.388212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:43.955 [2024-11-20 03:14:33.388299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.955 [2024-11-20 03:14:33.388321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:43.955 [2024-11-20 03:14:33.388332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.955 [2024-11-20 03:14:33.388806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.955 [2024-11-20 03:14:33.388836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:43.955 [2024-11-20 03:14:33.388918] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:43.955 [2024-11-20 03:14:33.388947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:43.955 [2024-11-20 03:14:33.389071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:43.955 [2024-11-20 03:14:33.389090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:43.955 [2024-11-20 03:14:33.389324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:43.955 [2024-11-20 03:14:33.389490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:43.955 [2024-11-20 03:14:33.389506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:43.955 [2024-11-20 03:14:33.389667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.955 pt2 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.955 
03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.955 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.955 "name": "raid_bdev1", 00:07:43.955 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:43.955 "strip_size_kb": 0, 00:07:43.955 "state": "online", 00:07:43.955 "raid_level": "raid1", 00:07:43.955 "superblock": true, 00:07:43.955 "num_base_bdevs": 2, 00:07:43.955 "num_base_bdevs_discovered": 2, 00:07:43.955 "num_base_bdevs_operational": 2, 00:07:43.955 "base_bdevs_list": [ 00:07:43.955 { 00:07:43.955 "name": "pt1", 00:07:43.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.955 "is_configured": true, 00:07:43.955 "data_offset": 2048, 00:07:43.955 "data_size": 63488 00:07:43.955 }, 00:07:43.955 { 00:07:43.956 "name": "pt2", 00:07:43.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.956 "is_configured": true, 00:07:43.956 "data_offset": 2048, 00:07:43.956 "data_size": 63488 00:07:43.956 } 00:07:43.956 ] 00:07:43.956 }' 00:07:43.956 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:43.956 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.216 [2024-11-20 03:14:33.815727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.216 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.216 "name": "raid_bdev1", 00:07:44.216 "aliases": [ 00:07:44.216 "230cdfab-6b31-4623-bc98-dab1af1f4789" 00:07:44.216 ], 00:07:44.216 "product_name": "Raid Volume", 00:07:44.216 "block_size": 512, 00:07:44.216 "num_blocks": 63488, 00:07:44.216 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:44.216 "assigned_rate_limits": { 00:07:44.216 "rw_ios_per_sec": 0, 00:07:44.216 "rw_mbytes_per_sec": 0, 00:07:44.216 "r_mbytes_per_sec": 0, 00:07:44.216 "w_mbytes_per_sec": 0 
00:07:44.216 }, 00:07:44.216 "claimed": false, 00:07:44.216 "zoned": false, 00:07:44.216 "supported_io_types": { 00:07:44.216 "read": true, 00:07:44.216 "write": true, 00:07:44.216 "unmap": false, 00:07:44.216 "flush": false, 00:07:44.216 "reset": true, 00:07:44.216 "nvme_admin": false, 00:07:44.216 "nvme_io": false, 00:07:44.216 "nvme_io_md": false, 00:07:44.216 "write_zeroes": true, 00:07:44.216 "zcopy": false, 00:07:44.216 "get_zone_info": false, 00:07:44.216 "zone_management": false, 00:07:44.216 "zone_append": false, 00:07:44.216 "compare": false, 00:07:44.216 "compare_and_write": false, 00:07:44.216 "abort": false, 00:07:44.216 "seek_hole": false, 00:07:44.216 "seek_data": false, 00:07:44.216 "copy": false, 00:07:44.216 "nvme_iov_md": false 00:07:44.216 }, 00:07:44.216 "memory_domains": [ 00:07:44.216 { 00:07:44.216 "dma_device_id": "system", 00:07:44.216 "dma_device_type": 1 00:07:44.216 }, 00:07:44.216 { 00:07:44.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.216 "dma_device_type": 2 00:07:44.216 }, 00:07:44.216 { 00:07:44.216 "dma_device_id": "system", 00:07:44.216 "dma_device_type": 1 00:07:44.216 }, 00:07:44.216 { 00:07:44.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.216 "dma_device_type": 2 00:07:44.216 } 00:07:44.216 ], 00:07:44.216 "driver_specific": { 00:07:44.216 "raid": { 00:07:44.216 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:44.216 "strip_size_kb": 0, 00:07:44.216 "state": "online", 00:07:44.216 "raid_level": "raid1", 00:07:44.216 "superblock": true, 00:07:44.216 "num_base_bdevs": 2, 00:07:44.216 "num_base_bdevs_discovered": 2, 00:07:44.216 "num_base_bdevs_operational": 2, 00:07:44.216 "base_bdevs_list": [ 00:07:44.216 { 00:07:44.216 "name": "pt1", 00:07:44.216 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.216 "is_configured": true, 00:07:44.216 "data_offset": 2048, 00:07:44.216 "data_size": 63488 00:07:44.216 }, 00:07:44.216 { 00:07:44.216 "name": "pt2", 00:07:44.216 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:44.216 "is_configured": true, 00:07:44.216 "data_offset": 2048, 00:07:44.216 "data_size": 63488 00:07:44.216 } 00:07:44.216 ] 00:07:44.216 } 00:07:44.216 } 00:07:44.216 }' 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:44.477 pt2' 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.477 03:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.477 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.477 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.477 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.477 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.477 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:44.477 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.477 [2024-11-20 03:14:34.031339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 230cdfab-6b31-4623-bc98-dab1af1f4789 '!=' 230cdfab-6b31-4623-bc98-dab1af1f4789 ']' 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.478 [2024-11-20 03:14:34.079053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.478 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.738 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:44.738 "name": "raid_bdev1", 00:07:44.738 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:44.738 "strip_size_kb": 0, 00:07:44.738 "state": "online", 00:07:44.738 "raid_level": "raid1", 00:07:44.738 "superblock": true, 00:07:44.738 "num_base_bdevs": 2, 00:07:44.738 "num_base_bdevs_discovered": 1, 00:07:44.738 "num_base_bdevs_operational": 1, 00:07:44.738 "base_bdevs_list": [ 00:07:44.738 { 00:07:44.738 "name": null, 00:07:44.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.738 "is_configured": false, 00:07:44.738 "data_offset": 0, 00:07:44.738 "data_size": 63488 00:07:44.738 }, 00:07:44.738 { 00:07:44.738 "name": "pt2", 00:07:44.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.738 "is_configured": true, 00:07:44.738 "data_offset": 2048, 00:07:44.738 "data_size": 63488 00:07:44.738 } 00:07:44.738 ] 00:07:44.738 }' 00:07:44.738 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.738 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.999 [2024-11-20 03:14:34.446448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.999 [2024-11-20 03:14:34.446481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.999 [2024-11-20 03:14:34.446561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.999 [2024-11-20 03:14:34.446626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.999 [2024-11-20 03:14:34.446639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.999 [2024-11-20 03:14:34.498356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.999 [2024-11-20 03:14:34.498429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.999 [2024-11-20 03:14:34.498450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:44.999 [2024-11-20 03:14:34.498462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.999 [2024-11-20 03:14:34.500680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.999 [2024-11-20 03:14:34.500721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:44.999 [2024-11-20 03:14:34.500809] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:44.999 [2024-11-20 03:14:34.500852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.999 [2024-11-20 03:14:34.500954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:44.999 [2024-11-20 03:14:34.500971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:44.999 [2024-11-20 03:14:34.501217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:44.999 [2024-11-20 03:14:34.501383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:44.999 [2024-11-20 03:14:34.501400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:07:44.999 [2024-11-20 03:14:34.501544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.999 pt2 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:44.999 "name": "raid_bdev1", 00:07:44.999 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:44.999 "strip_size_kb": 0, 00:07:44.999 "state": "online", 00:07:44.999 "raid_level": "raid1", 00:07:44.999 "superblock": true, 00:07:44.999 "num_base_bdevs": 2, 00:07:44.999 "num_base_bdevs_discovered": 1, 00:07:44.999 "num_base_bdevs_operational": 1, 00:07:44.999 "base_bdevs_list": [ 00:07:44.999 { 00:07:44.999 "name": null, 00:07:44.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.999 "is_configured": false, 00:07:44.999 "data_offset": 2048, 00:07:44.999 "data_size": 63488 00:07:44.999 }, 00:07:44.999 { 00:07:44.999 "name": "pt2", 00:07:44.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.999 "is_configured": true, 00:07:44.999 "data_offset": 2048, 00:07:44.999 "data_size": 63488 00:07:44.999 } 00:07:44.999 ] 00:07:44.999 }' 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.999 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.260 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.260 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.260 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.260 [2024-11-20 03:14:34.873684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.260 [2024-11-20 03:14:34.873717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.260 [2024-11-20 03:14:34.873795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.260 [2024-11-20 03:14:34.873848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.260 [2024-11-20 03:14:34.873857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:45.260 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.260 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.260 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.260 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.260 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:45.260 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.520 [2024-11-20 03:14:34.925641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:45.520 [2024-11-20 03:14:34.925714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.520 [2024-11-20 03:14:34.925734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:45.520 [2024-11-20 03:14:34.925743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.520 [2024-11-20 03:14:34.928147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.520 [2024-11-20 03:14:34.928285] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:45.520 [2024-11-20 03:14:34.928400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:45.520 [2024-11-20 03:14:34.928453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:45.520 [2024-11-20 03:14:34.928638] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:45.520 [2024-11-20 03:14:34.928651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.520 [2024-11-20 03:14:34.928690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:45.520 [2024-11-20 03:14:34.928787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.520 [2024-11-20 03:14:34.928888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:45.520 [2024-11-20 03:14:34.928899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:45.520 [2024-11-20 03:14:34.929182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:45.520 [2024-11-20 03:14:34.929344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:45.520 [2024-11-20 03:14:34.929356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:45.520 [2024-11-20 03:14:34.929542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.520 pt1 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.520 "name": "raid_bdev1", 00:07:45.520 "uuid": "230cdfab-6b31-4623-bc98-dab1af1f4789", 00:07:45.520 "strip_size_kb": 0, 00:07:45.520 "state": "online", 00:07:45.520 "raid_level": "raid1", 00:07:45.520 "superblock": true, 00:07:45.520 "num_base_bdevs": 2, 00:07:45.520 "num_base_bdevs_discovered": 1, 00:07:45.520 "num_base_bdevs_operational": 
1, 00:07:45.520 "base_bdevs_list": [ 00:07:45.520 { 00:07:45.520 "name": null, 00:07:45.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.520 "is_configured": false, 00:07:45.520 "data_offset": 2048, 00:07:45.520 "data_size": 63488 00:07:45.520 }, 00:07:45.520 { 00:07:45.520 "name": "pt2", 00:07:45.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.520 "is_configured": true, 00:07:45.520 "data_offset": 2048, 00:07:45.520 "data_size": 63488 00:07:45.520 } 00:07:45.520 ] 00:07:45.520 }' 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.520 03:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.781 [2024-11-20 03:14:35.393012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.781 03:14:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 230cdfab-6b31-4623-bc98-dab1af1f4789 '!=' 230cdfab-6b31-4623-bc98-dab1af1f4789 ']' 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63075 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63075 ']' 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63075 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63075 00:07:46.041 killing process with pid 63075 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63075' 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63075 00:07:46.041 [2024-11-20 03:14:35.448930] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.041 [2024-11-20 03:14:35.449020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.041 [2024-11-20 03:14:35.449067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.041 [2024-11-20 03:14:35.449081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:46.041 03:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 
63075 00:07:46.041 [2024-11-20 03:14:35.652428] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.461 03:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:47.461 ************************************ 00:07:47.461 END TEST raid_superblock_test 00:07:47.461 ************************************ 00:07:47.461 00:07:47.461 real 0m5.784s 00:07:47.461 user 0m8.757s 00:07:47.461 sys 0m0.987s 00:07:47.461 03:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.461 03:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.461 03:14:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:47.461 03:14:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:47.461 03:14:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.461 03:14:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.461 ************************************ 00:07:47.461 START TEST raid_read_error_test 00:07:47.461 ************************************ 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0eZ1qOajQB 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63400 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63400 00:07:47.461 
03:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63400 ']' 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.461 03:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.461 [2024-11-20 03:14:36.909153] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:47.461 [2024-11-20 03:14:36.909390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63400 ] 00:07:47.461 [2024-11-20 03:14:37.080779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.720 [2024-11-20 03:14:37.190390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.979 [2024-11-20 03:14:37.391417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.979 [2024-11-20 03:14:37.391466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.239 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.239 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.239 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:48.239 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:48.239 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.239 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.239 BaseBdev1_malloc 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.240 true 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.240 [2024-11-20 03:14:37.818321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:48.240 [2024-11-20 03:14:37.818380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.240 [2024-11-20 03:14:37.818402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:48.240 [2024-11-20 03:14:37.818412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.240 [2024-11-20 03:14:37.820763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.240 [2024-11-20 03:14:37.820853] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:07:48.240 BaseBdev1 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.240 BaseBdev2_malloc 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.240 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.499 true 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.499 [2024-11-20 03:14:37.883773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:48.499 [2024-11-20 03:14:37.883830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.499 [2024-11-20 03:14:37.883846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:48.499 [2024-11-20 03:14:37.883858] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.499 [2024-11-20 03:14:37.885917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.499 [2024-11-20 03:14:37.885958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:48.499 BaseBdev2 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.499 [2024-11-20 03:14:37.895801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.499 [2024-11-20 03:14:37.897545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.499 [2024-11-20 03:14:37.897749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:48.499 [2024-11-20 03:14:37.897766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:48.499 [2024-11-20 03:14:37.898000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:48.499 [2024-11-20 03:14:37.898176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:48.499 [2024-11-20 03:14:37.898187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:48.499 [2024-11-20 03:14:37.898330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.499 "name": "raid_bdev1", 00:07:48.499 "uuid": "e617ccbf-5e88-4500-b79f-5a4d93f001ab", 00:07:48.499 "strip_size_kb": 0, 00:07:48.499 "state": "online", 00:07:48.499 "raid_level": "raid1", 00:07:48.499 "superblock": true, 00:07:48.499 "num_base_bdevs": 2, 00:07:48.499 
"num_base_bdevs_discovered": 2, 00:07:48.499 "num_base_bdevs_operational": 2, 00:07:48.499 "base_bdevs_list": [ 00:07:48.499 { 00:07:48.499 "name": "BaseBdev1", 00:07:48.499 "uuid": "0ad28231-221b-56ee-a1a0-c270af99a441", 00:07:48.499 "is_configured": true, 00:07:48.499 "data_offset": 2048, 00:07:48.499 "data_size": 63488 00:07:48.499 }, 00:07:48.499 { 00:07:48.499 "name": "BaseBdev2", 00:07:48.499 "uuid": "99a69e43-73f5-55d3-8b68-4f9ceb88f2fb", 00:07:48.499 "is_configured": true, 00:07:48.499 "data_offset": 2048, 00:07:48.499 "data_size": 63488 00:07:48.499 } 00:07:48.499 ] 00:07:48.499 }' 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.499 03:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.757 03:14:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:48.757 03:14:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:49.017 [2024-11-20 03:14:38.396254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:49.956 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:49.956 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.956 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.956 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:49.957 03:14:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.957 "name": "raid_bdev1", 00:07:49.957 "uuid": "e617ccbf-5e88-4500-b79f-5a4d93f001ab", 00:07:49.957 "strip_size_kb": 0, 00:07:49.957 "state": "online", 
00:07:49.957 "raid_level": "raid1", 00:07:49.957 "superblock": true, 00:07:49.957 "num_base_bdevs": 2, 00:07:49.957 "num_base_bdevs_discovered": 2, 00:07:49.957 "num_base_bdevs_operational": 2, 00:07:49.957 "base_bdevs_list": [ 00:07:49.957 { 00:07:49.957 "name": "BaseBdev1", 00:07:49.957 "uuid": "0ad28231-221b-56ee-a1a0-c270af99a441", 00:07:49.957 "is_configured": true, 00:07:49.957 "data_offset": 2048, 00:07:49.957 "data_size": 63488 00:07:49.957 }, 00:07:49.957 { 00:07:49.957 "name": "BaseBdev2", 00:07:49.957 "uuid": "99a69e43-73f5-55d3-8b68-4f9ceb88f2fb", 00:07:49.957 "is_configured": true, 00:07:49.957 "data_offset": 2048, 00:07:49.957 "data_size": 63488 00:07:49.957 } 00:07:49.957 ] 00:07:49.957 }' 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.957 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.217 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:50.217 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.217 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.217 [2024-11-20 03:14:39.798908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.217 [2024-11-20 03:14:39.799040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.217 [2024-11-20 03:14:39.801745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.217 [2024-11-20 03:14:39.801836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.217 [2024-11-20 03:14:39.801954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.217 [2024-11-20 03:14:39.802013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:07:50.217 { 00:07:50.217 "results": [ 00:07:50.217 { 00:07:50.217 "job": "raid_bdev1", 00:07:50.217 "core_mask": "0x1", 00:07:50.217 "workload": "randrw", 00:07:50.217 "percentage": 50, 00:07:50.217 "status": "finished", 00:07:50.217 "queue_depth": 1, 00:07:50.217 "io_size": 131072, 00:07:50.217 "runtime": 1.403457, 00:07:50.217 "iops": 17953.52476064461, 00:07:50.217 "mibps": 2244.1905950805763, 00:07:50.217 "io_failed": 0, 00:07:50.217 "io_timeout": 0, 00:07:50.217 "avg_latency_us": 53.0997379080791, 00:07:50.217 "min_latency_us": 23.14061135371179, 00:07:50.217 "max_latency_us": 1616.9362445414847 00:07:50.218 } 00:07:50.218 ], 00:07:50.218 "core_count": 1 00:07:50.218 } 00:07:50.218 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.218 03:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63400 00:07:50.218 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63400 ']' 00:07:50.218 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63400 00:07:50.218 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:50.218 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.218 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63400 00:07:50.478 killing process with pid 63400 00:07:50.478 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.478 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.478 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63400' 00:07:50.478 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63400 00:07:50.478 [2024-11-20 
03:14:39.851828] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.478 03:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63400 00:07:50.478 [2024-11-20 03:14:39.988265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0eZ1qOajQB 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:51.860 00:07:51.860 real 0m4.331s 00:07:51.860 user 0m5.192s 00:07:51.860 sys 0m0.536s 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.860 ************************************ 00:07:51.860 END TEST raid_read_error_test 00:07:51.860 ************************************ 00:07:51.860 03:14:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.860 03:14:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:51.860 03:14:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:51.860 03:14:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.860 03:14:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.860 ************************************ 00:07:51.860 START TEST 
raid_write_error_test 00:07:51.860 ************************************ 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:51.860 03:14:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hCxRgaBfIM 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63540 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63540 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63540 ']' 00:07:51.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.860 03:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.860 [2024-11-20 03:14:41.314669] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:51.860 [2024-11-20 03:14:41.314864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63540 ] 00:07:51.860 [2024-11-20 03:14:41.488066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.120 [2024-11-20 03:14:41.603256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.379 [2024-11-20 03:14:41.798221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.379 [2024-11-20 03:14:41.798265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.639 BaseBdev1_malloc 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.639 true 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.639 [2024-11-20 03:14:42.205747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:52.639 [2024-11-20 03:14:42.205807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.639 [2024-11-20 03:14:42.205828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:52.639 [2024-11-20 03:14:42.205838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.639 [2024-11-20 03:14:42.208075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.639 [2024-11-20 03:14:42.208185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:52.639 BaseBdev1 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.639 BaseBdev2_malloc 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:52.639 03:14:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.639 true 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.639 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.900 [2024-11-20 03:14:42.272991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:52.900 [2024-11-20 03:14:42.273090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.900 [2024-11-20 03:14:42.273126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:52.900 [2024-11-20 03:14:42.273137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.900 [2024-11-20 03:14:42.275385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.900 [2024-11-20 03:14:42.275425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:52.900 BaseBdev2 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.900 [2024-11-20 03:14:42.285021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:52.900 [2024-11-20 03:14:42.286820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.900 [2024-11-20 03:14:42.287020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.900 [2024-11-20 03:14:42.287036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:52.900 [2024-11-20 03:14:42.287267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:52.900 [2024-11-20 03:14:42.287450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.900 [2024-11-20 03:14:42.287460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:52.900 [2024-11-20 03:14:42.287610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.900 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.900 "name": "raid_bdev1", 00:07:52.900 "uuid": "268877ed-d6e3-4b4d-98b2-91654e2668a9", 00:07:52.900 "strip_size_kb": 0, 00:07:52.900 "state": "online", 00:07:52.900 "raid_level": "raid1", 00:07:52.900 "superblock": true, 00:07:52.900 "num_base_bdevs": 2, 00:07:52.900 "num_base_bdevs_discovered": 2, 00:07:52.900 "num_base_bdevs_operational": 2, 00:07:52.900 "base_bdevs_list": [ 00:07:52.900 { 00:07:52.900 "name": "BaseBdev1", 00:07:52.901 "uuid": "a49ac84a-89a6-52be-8b05-f2106480d682", 00:07:52.901 "is_configured": true, 00:07:52.901 "data_offset": 2048, 00:07:52.901 "data_size": 63488 00:07:52.901 }, 00:07:52.901 { 00:07:52.901 "name": "BaseBdev2", 00:07:52.901 "uuid": "74f213a1-3f11-5f28-95de-b89fe18b4643", 00:07:52.901 "is_configured": true, 00:07:52.901 "data_offset": 2048, 00:07:52.901 "data_size": 63488 00:07:52.901 } 00:07:52.901 ] 00:07:52.901 }' 00:07:52.901 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.901 03:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.160 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:53.160 03:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:53.420 [2024-11-20 03:14:42.837192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.360 [2024-11-20 03:14:43.745441] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:54.360 [2024-11-20 03:14:43.745590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:54.360 [2024-11-20 03:14:43.745847] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:54.360 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.361 03:14:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.361 "name": "raid_bdev1", 00:07:54.361 "uuid": "268877ed-d6e3-4b4d-98b2-91654e2668a9", 00:07:54.361 "strip_size_kb": 0, 00:07:54.361 "state": "online", 00:07:54.361 "raid_level": "raid1", 00:07:54.361 "superblock": true, 00:07:54.361 "num_base_bdevs": 2, 00:07:54.361 "num_base_bdevs_discovered": 1, 00:07:54.361 "num_base_bdevs_operational": 1, 00:07:54.361 "base_bdevs_list": [ 00:07:54.361 { 00:07:54.361 "name": null, 00:07:54.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.361 "is_configured": false, 00:07:54.361 "data_offset": 0, 00:07:54.361 "data_size": 63488 00:07:54.361 }, 
00:07:54.361 { 00:07:54.361 "name": "BaseBdev2", 00:07:54.361 "uuid": "74f213a1-3f11-5f28-95de-b89fe18b4643", 00:07:54.361 "is_configured": true, 00:07:54.361 "data_offset": 2048, 00:07:54.361 "data_size": 63488 00:07:54.361 } 00:07:54.361 ] 00:07:54.361 }' 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.361 03:14:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.620 03:14:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.620 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.620 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.620 [2024-11-20 03:14:44.218481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.621 [2024-11-20 03:14:44.218514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.621 [2024-11-20 03:14:44.221198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.621 [2024-11-20 03:14:44.221271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.621 [2024-11-20 03:14:44.221348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.621 [2024-11-20 03:14:44.221395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:54.621 { 00:07:54.621 "results": [ 00:07:54.621 { 00:07:54.621 "job": "raid_bdev1", 00:07:54.621 "core_mask": "0x1", 00:07:54.621 "workload": "randrw", 00:07:54.621 "percentage": 50, 00:07:54.621 "status": "finished", 00:07:54.621 "queue_depth": 1, 00:07:54.621 "io_size": 131072, 00:07:54.621 "runtime": 1.38208, 00:07:54.621 "iops": 21193.418615420236, 00:07:54.621 "mibps": 2649.1773269275295, 00:07:54.621 "io_failed": 0, 
00:07:54.621 "io_timeout": 0, 00:07:54.621 "avg_latency_us": 44.623365568719485, 00:07:54.621 "min_latency_us": 22.805240174672488, 00:07:54.621 "max_latency_us": 1387.989519650655 00:07:54.621 } 00:07:54.621 ], 00:07:54.621 "core_count": 1 00:07:54.621 } 00:07:54.621 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.621 03:14:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63540 00:07:54.621 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63540 ']' 00:07:54.621 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63540 00:07:54.621 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:54.621 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.621 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63540 00:07:54.881 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.881 killing process with pid 63540 00:07:54.881 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.881 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63540' 00:07:54.881 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63540 00:07:54.881 [2024-11-20 03:14:44.267686] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.881 03:14:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63540 00:07:54.881 [2024-11-20 03:14:44.401018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.270 03:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:56.270 03:14:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hCxRgaBfIM 00:07:56.270 03:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:56.270 03:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:56.270 03:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:56.270 03:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.270 03:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.270 03:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:56.270 00:07:56.270 real 0m4.352s 00:07:56.270 user 0m5.273s 00:07:56.270 sys 0m0.528s 00:07:56.270 ************************************ 00:07:56.270 END TEST raid_write_error_test 00:07:56.270 ************************************ 00:07:56.271 03:14:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.271 03:14:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.271 03:14:45 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:56.271 03:14:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:56.271 03:14:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:56.271 03:14:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.271 03:14:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.271 03:14:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.271 ************************************ 00:07:56.271 START TEST raid_state_function_test 00:07:56.271 ************************************ 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:07:56.271 03:14:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63678 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:56.271 Process raid pid: 63678 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63678' 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63678 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63678 ']' 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.271 03:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.271 [2024-11-20 03:14:45.723924] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:56.271 [2024-11-20 03:14:45.724115] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.271 [2024-11-20 03:14:45.896507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.543 [2024-11-20 03:14:46.009592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.803 [2024-11-20 03:14:46.214773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.803 [2024-11-20 03:14:46.214816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.064 [2024-11-20 03:14:46.567685] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.064 [2024-11-20 03:14:46.567743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.064 [2024-11-20 03:14:46.567754] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.064 [2024-11-20 03:14:46.567764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.064 [2024-11-20 03:14:46.567770] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.064 [2024-11-20 03:14:46.567780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.064 "name": "Existed_Raid", 00:07:57.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.064 "strip_size_kb": 64, 00:07:57.064 "state": "configuring", 00:07:57.064 "raid_level": "raid0", 00:07:57.064 "superblock": false, 00:07:57.064 "num_base_bdevs": 3, 00:07:57.064 "num_base_bdevs_discovered": 0, 00:07:57.064 "num_base_bdevs_operational": 3, 00:07:57.064 "base_bdevs_list": [ 00:07:57.064 { 00:07:57.064 "name": "BaseBdev1", 00:07:57.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.064 "is_configured": false, 00:07:57.064 "data_offset": 0, 00:07:57.064 "data_size": 0 00:07:57.064 }, 00:07:57.064 { 00:07:57.064 "name": "BaseBdev2", 00:07:57.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.064 "is_configured": false, 00:07:57.064 "data_offset": 0, 00:07:57.064 "data_size": 0 00:07:57.064 }, 00:07:57.064 { 00:07:57.064 "name": "BaseBdev3", 00:07:57.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.064 "is_configured": false, 00:07:57.064 "data_offset": 0, 00:07:57.064 "data_size": 0 00:07:57.064 } 00:07:57.064 ] 00:07:57.064 }' 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.064 03:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.636 03:14:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.636 [2024-11-20 03:14:47.014840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.636 [2024-11-20 03:14:47.014940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.636 [2024-11-20 03:14:47.026811] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.636 [2024-11-20 03:14:47.026899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.636 [2024-11-20 03:14:47.026932] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.636 [2024-11-20 03:14:47.026972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.636 [2024-11-20 03:14:47.026998] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.636 [2024-11-20 03:14:47.027032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.636 [2024-11-20 03:14:47.074859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.636 BaseBdev1 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.636 [ 00:07:57.636 { 00:07:57.636 "name": "BaseBdev1", 00:07:57.636 "aliases": [ 00:07:57.636 "df0c0435-7ef4-4776-880e-71ef65f4d651" 00:07:57.636 ], 00:07:57.636 
"product_name": "Malloc disk", 00:07:57.636 "block_size": 512, 00:07:57.636 "num_blocks": 65536, 00:07:57.636 "uuid": "df0c0435-7ef4-4776-880e-71ef65f4d651", 00:07:57.636 "assigned_rate_limits": { 00:07:57.636 "rw_ios_per_sec": 0, 00:07:57.636 "rw_mbytes_per_sec": 0, 00:07:57.636 "r_mbytes_per_sec": 0, 00:07:57.636 "w_mbytes_per_sec": 0 00:07:57.636 }, 00:07:57.636 "claimed": true, 00:07:57.636 "claim_type": "exclusive_write", 00:07:57.636 "zoned": false, 00:07:57.636 "supported_io_types": { 00:07:57.636 "read": true, 00:07:57.636 "write": true, 00:07:57.636 "unmap": true, 00:07:57.636 "flush": true, 00:07:57.636 "reset": true, 00:07:57.636 "nvme_admin": false, 00:07:57.636 "nvme_io": false, 00:07:57.636 "nvme_io_md": false, 00:07:57.636 "write_zeroes": true, 00:07:57.636 "zcopy": true, 00:07:57.636 "get_zone_info": false, 00:07:57.636 "zone_management": false, 00:07:57.636 "zone_append": false, 00:07:57.636 "compare": false, 00:07:57.636 "compare_and_write": false, 00:07:57.636 "abort": true, 00:07:57.636 "seek_hole": false, 00:07:57.636 "seek_data": false, 00:07:57.636 "copy": true, 00:07:57.636 "nvme_iov_md": false 00:07:57.636 }, 00:07:57.636 "memory_domains": [ 00:07:57.636 { 00:07:57.636 "dma_device_id": "system", 00:07:57.636 "dma_device_type": 1 00:07:57.636 }, 00:07:57.636 { 00:07:57.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.636 "dma_device_type": 2 00:07:57.636 } 00:07:57.636 ], 00:07:57.636 "driver_specific": {} 00:07:57.636 } 00:07:57.636 ] 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.636 03:14:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.636 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.636 "name": "Existed_Raid", 00:07:57.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.636 "strip_size_kb": 64, 00:07:57.636 "state": "configuring", 00:07:57.637 "raid_level": "raid0", 00:07:57.637 "superblock": false, 00:07:57.637 "num_base_bdevs": 3, 00:07:57.637 "num_base_bdevs_discovered": 1, 00:07:57.637 "num_base_bdevs_operational": 3, 00:07:57.637 "base_bdevs_list": [ 00:07:57.637 { 00:07:57.637 "name": "BaseBdev1", 
00:07:57.637 "uuid": "df0c0435-7ef4-4776-880e-71ef65f4d651", 00:07:57.637 "is_configured": true, 00:07:57.637 "data_offset": 0, 00:07:57.637 "data_size": 65536 00:07:57.637 }, 00:07:57.637 { 00:07:57.637 "name": "BaseBdev2", 00:07:57.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.637 "is_configured": false, 00:07:57.637 "data_offset": 0, 00:07:57.637 "data_size": 0 00:07:57.637 }, 00:07:57.637 { 00:07:57.637 "name": "BaseBdev3", 00:07:57.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.637 "is_configured": false, 00:07:57.637 "data_offset": 0, 00:07:57.637 "data_size": 0 00:07:57.637 } 00:07:57.637 ] 00:07:57.637 }' 00:07:57.637 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.637 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.207 [2024-11-20 03:14:47.550176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.207 [2024-11-20 03:14:47.550229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.207 [2024-11-20 
03:14:47.558207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.207 [2024-11-20 03:14:47.560009] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.207 [2024-11-20 03:14:47.560115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.207 [2024-11-20 03:14:47.560129] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:58.207 [2024-11-20 03:14:47.560155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.207 "name": "Existed_Raid", 00:07:58.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.207 "strip_size_kb": 64, 00:07:58.207 "state": "configuring", 00:07:58.207 "raid_level": "raid0", 00:07:58.207 "superblock": false, 00:07:58.207 "num_base_bdevs": 3, 00:07:58.207 "num_base_bdevs_discovered": 1, 00:07:58.207 "num_base_bdevs_operational": 3, 00:07:58.207 "base_bdevs_list": [ 00:07:58.207 { 00:07:58.207 "name": "BaseBdev1", 00:07:58.207 "uuid": "df0c0435-7ef4-4776-880e-71ef65f4d651", 00:07:58.207 "is_configured": true, 00:07:58.207 "data_offset": 0, 00:07:58.207 "data_size": 65536 00:07:58.207 }, 00:07:58.207 { 00:07:58.207 "name": "BaseBdev2", 00:07:58.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.207 "is_configured": false, 00:07:58.207 "data_offset": 0, 00:07:58.207 "data_size": 0 00:07:58.207 }, 00:07:58.207 { 00:07:58.207 "name": "BaseBdev3", 00:07:58.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.207 "is_configured": false, 00:07:58.207 "data_offset": 0, 00:07:58.207 "data_size": 0 00:07:58.207 } 00:07:58.207 ] 00:07:58.207 }' 00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:58.207 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.468 03:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.468 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.468 03:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.468 [2024-11-20 03:14:48.019818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.468 BaseBdev2 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.468 03:14:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.468 [ 00:07:58.468 { 00:07:58.468 "name": "BaseBdev2", 00:07:58.468 "aliases": [ 00:07:58.468 "9065e311-c628-4b0b-8013-45b190f79a27" 00:07:58.468 ], 00:07:58.468 "product_name": "Malloc disk", 00:07:58.468 "block_size": 512, 00:07:58.468 "num_blocks": 65536, 00:07:58.468 "uuid": "9065e311-c628-4b0b-8013-45b190f79a27", 00:07:58.468 "assigned_rate_limits": { 00:07:58.468 "rw_ios_per_sec": 0, 00:07:58.468 "rw_mbytes_per_sec": 0, 00:07:58.468 "r_mbytes_per_sec": 0, 00:07:58.468 "w_mbytes_per_sec": 0 00:07:58.468 }, 00:07:58.468 "claimed": true, 00:07:58.468 "claim_type": "exclusive_write", 00:07:58.468 "zoned": false, 00:07:58.468 "supported_io_types": { 00:07:58.468 "read": true, 00:07:58.468 "write": true, 00:07:58.468 "unmap": true, 00:07:58.468 "flush": true, 00:07:58.468 "reset": true, 00:07:58.468 "nvme_admin": false, 00:07:58.468 "nvme_io": false, 00:07:58.468 "nvme_io_md": false, 00:07:58.468 "write_zeroes": true, 00:07:58.468 "zcopy": true, 00:07:58.468 "get_zone_info": false, 00:07:58.468 "zone_management": false, 00:07:58.468 "zone_append": false, 00:07:58.468 "compare": false, 00:07:58.468 "compare_and_write": false, 00:07:58.468 "abort": true, 00:07:58.468 "seek_hole": false, 00:07:58.468 "seek_data": false, 00:07:58.468 "copy": true, 00:07:58.468 "nvme_iov_md": false 00:07:58.468 }, 00:07:58.468 "memory_domains": [ 00:07:58.468 { 00:07:58.468 "dma_device_id": "system", 00:07:58.468 "dma_device_type": 1 00:07:58.468 }, 00:07:58.468 { 00:07:58.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.468 "dma_device_type": 2 00:07:58.468 } 00:07:58.468 ], 00:07:58.468 "driver_specific": {} 00:07:58.468 } 00:07:58.468 ] 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.468 03:14:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.468 03:14:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.728 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.728 "name": "Existed_Raid", 00:07:58.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.728 "strip_size_kb": 64, 00:07:58.728 "state": "configuring", 00:07:58.728 "raid_level": "raid0", 00:07:58.728 "superblock": false, 00:07:58.728 "num_base_bdevs": 3, 00:07:58.728 "num_base_bdevs_discovered": 2, 00:07:58.728 "num_base_bdevs_operational": 3, 00:07:58.728 "base_bdevs_list": [ 00:07:58.728 { 00:07:58.728 "name": "BaseBdev1", 00:07:58.728 "uuid": "df0c0435-7ef4-4776-880e-71ef65f4d651", 00:07:58.728 "is_configured": true, 00:07:58.728 "data_offset": 0, 00:07:58.728 "data_size": 65536 00:07:58.728 }, 00:07:58.728 { 00:07:58.728 "name": "BaseBdev2", 00:07:58.728 "uuid": "9065e311-c628-4b0b-8013-45b190f79a27", 00:07:58.728 "is_configured": true, 00:07:58.728 "data_offset": 0, 00:07:58.728 "data_size": 65536 00:07:58.728 }, 00:07:58.728 { 00:07:58.728 "name": "BaseBdev3", 00:07:58.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.728 "is_configured": false, 00:07:58.728 "data_offset": 0, 00:07:58.728 "data_size": 0 00:07:58.728 } 00:07:58.728 ] 00:07:58.728 }' 00:07:58.729 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.729 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.989 [2024-11-20 03:14:48.557086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:58.989 [2024-11-20 03:14:48.557129] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.989 [2024-11-20 03:14:48.557141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:58.989 [2024-11-20 03:14:48.557397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:58.989 [2024-11-20 03:14:48.557550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.989 [2024-11-20 03:14:48.557559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:58.989 [2024-11-20 03:14:48.557847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.989 BaseBdev3 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.989 
03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.989 [ 00:07:58.989 { 00:07:58.989 "name": "BaseBdev3", 00:07:58.989 "aliases": [ 00:07:58.989 "10c16922-4444-4403-bcc9-895f3c038259" 00:07:58.989 ], 00:07:58.989 "product_name": "Malloc disk", 00:07:58.989 "block_size": 512, 00:07:58.989 "num_blocks": 65536, 00:07:58.989 "uuid": "10c16922-4444-4403-bcc9-895f3c038259", 00:07:58.989 "assigned_rate_limits": { 00:07:58.989 "rw_ios_per_sec": 0, 00:07:58.989 "rw_mbytes_per_sec": 0, 00:07:58.989 "r_mbytes_per_sec": 0, 00:07:58.989 "w_mbytes_per_sec": 0 00:07:58.989 }, 00:07:58.989 "claimed": true, 00:07:58.989 "claim_type": "exclusive_write", 00:07:58.989 "zoned": false, 00:07:58.989 "supported_io_types": { 00:07:58.989 "read": true, 00:07:58.989 "write": true, 00:07:58.989 "unmap": true, 00:07:58.989 "flush": true, 00:07:58.989 "reset": true, 00:07:58.989 "nvme_admin": false, 00:07:58.989 "nvme_io": false, 00:07:58.989 "nvme_io_md": false, 00:07:58.989 "write_zeroes": true, 00:07:58.989 "zcopy": true, 00:07:58.989 "get_zone_info": false, 00:07:58.989 "zone_management": false, 00:07:58.989 "zone_append": false, 00:07:58.989 "compare": false, 00:07:58.989 "compare_and_write": false, 00:07:58.989 "abort": true, 00:07:58.989 "seek_hole": false, 00:07:58.989 "seek_data": false, 00:07:58.989 "copy": true, 00:07:58.989 "nvme_iov_md": false 00:07:58.989 }, 00:07:58.989 "memory_domains": [ 00:07:58.989 { 00:07:58.989 "dma_device_id": "system", 00:07:58.989 "dma_device_type": 1 00:07:58.989 }, 00:07:58.989 { 00:07:58.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.989 "dma_device_type": 2 00:07:58.989 } 00:07:58.989 ], 00:07:58.989 "driver_specific": {} 00:07:58.989 } 00:07:58.989 ] 
00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.989 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.990 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.250 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.250 "name": "Existed_Raid", 00:07:59.250 "uuid": "0043dc2b-42cc-4bdf-a86d-e8a4d13856c4", 00:07:59.250 "strip_size_kb": 64, 00:07:59.250 "state": "online", 00:07:59.250 "raid_level": "raid0", 00:07:59.250 "superblock": false, 00:07:59.250 "num_base_bdevs": 3, 00:07:59.250 "num_base_bdevs_discovered": 3, 00:07:59.250 "num_base_bdevs_operational": 3, 00:07:59.250 "base_bdevs_list": [ 00:07:59.250 { 00:07:59.250 "name": "BaseBdev1", 00:07:59.250 "uuid": "df0c0435-7ef4-4776-880e-71ef65f4d651", 00:07:59.250 "is_configured": true, 00:07:59.250 "data_offset": 0, 00:07:59.250 "data_size": 65536 00:07:59.250 }, 00:07:59.250 { 00:07:59.250 "name": "BaseBdev2", 00:07:59.250 "uuid": "9065e311-c628-4b0b-8013-45b190f79a27", 00:07:59.250 "is_configured": true, 00:07:59.250 "data_offset": 0, 00:07:59.250 "data_size": 65536 00:07:59.250 }, 00:07:59.250 { 00:07:59.250 "name": "BaseBdev3", 00:07:59.250 "uuid": "10c16922-4444-4403-bcc9-895f3c038259", 00:07:59.250 "is_configured": true, 00:07:59.250 "data_offset": 0, 00:07:59.250 "data_size": 65536 00:07:59.250 } 00:07:59.250 ] 00:07:59.250 }' 00:07:59.250 03:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.250 03:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.511 [2024-11-20 03:14:49.068585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.511 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.511 "name": "Existed_Raid", 00:07:59.511 "aliases": [ 00:07:59.511 "0043dc2b-42cc-4bdf-a86d-e8a4d13856c4" 00:07:59.511 ], 00:07:59.511 "product_name": "Raid Volume", 00:07:59.511 "block_size": 512, 00:07:59.511 "num_blocks": 196608, 00:07:59.511 "uuid": "0043dc2b-42cc-4bdf-a86d-e8a4d13856c4", 00:07:59.511 "assigned_rate_limits": { 00:07:59.511 "rw_ios_per_sec": 0, 00:07:59.511 "rw_mbytes_per_sec": 0, 00:07:59.511 "r_mbytes_per_sec": 0, 00:07:59.511 "w_mbytes_per_sec": 0 00:07:59.511 }, 00:07:59.511 "claimed": false, 00:07:59.511 "zoned": false, 00:07:59.511 "supported_io_types": { 00:07:59.511 "read": true, 00:07:59.511 "write": true, 00:07:59.511 "unmap": true, 00:07:59.511 "flush": true, 00:07:59.511 "reset": true, 00:07:59.511 "nvme_admin": false, 00:07:59.511 "nvme_io": false, 00:07:59.511 "nvme_io_md": false, 00:07:59.511 "write_zeroes": true, 00:07:59.511 "zcopy": false, 00:07:59.511 "get_zone_info": false, 00:07:59.511 "zone_management": false, 00:07:59.511 
"zone_append": false, 00:07:59.511 "compare": false, 00:07:59.511 "compare_and_write": false, 00:07:59.511 "abort": false, 00:07:59.511 "seek_hole": false, 00:07:59.511 "seek_data": false, 00:07:59.511 "copy": false, 00:07:59.511 "nvme_iov_md": false 00:07:59.511 }, 00:07:59.511 "memory_domains": [ 00:07:59.511 { 00:07:59.511 "dma_device_id": "system", 00:07:59.511 "dma_device_type": 1 00:07:59.511 }, 00:07:59.511 { 00:07:59.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.511 "dma_device_type": 2 00:07:59.511 }, 00:07:59.511 { 00:07:59.511 "dma_device_id": "system", 00:07:59.511 "dma_device_type": 1 00:07:59.511 }, 00:07:59.511 { 00:07:59.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.511 "dma_device_type": 2 00:07:59.511 }, 00:07:59.511 { 00:07:59.511 "dma_device_id": "system", 00:07:59.511 "dma_device_type": 1 00:07:59.511 }, 00:07:59.511 { 00:07:59.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.512 "dma_device_type": 2 00:07:59.512 } 00:07:59.512 ], 00:07:59.512 "driver_specific": { 00:07:59.512 "raid": { 00:07:59.512 "uuid": "0043dc2b-42cc-4bdf-a86d-e8a4d13856c4", 00:07:59.512 "strip_size_kb": 64, 00:07:59.512 "state": "online", 00:07:59.512 "raid_level": "raid0", 00:07:59.512 "superblock": false, 00:07:59.512 "num_base_bdevs": 3, 00:07:59.512 "num_base_bdevs_discovered": 3, 00:07:59.512 "num_base_bdevs_operational": 3, 00:07:59.512 "base_bdevs_list": [ 00:07:59.512 { 00:07:59.512 "name": "BaseBdev1", 00:07:59.512 "uuid": "df0c0435-7ef4-4776-880e-71ef65f4d651", 00:07:59.512 "is_configured": true, 00:07:59.512 "data_offset": 0, 00:07:59.512 "data_size": 65536 00:07:59.512 }, 00:07:59.512 { 00:07:59.512 "name": "BaseBdev2", 00:07:59.512 "uuid": "9065e311-c628-4b0b-8013-45b190f79a27", 00:07:59.512 "is_configured": true, 00:07:59.512 "data_offset": 0, 00:07:59.512 "data_size": 65536 00:07:59.512 }, 00:07:59.512 { 00:07:59.512 "name": "BaseBdev3", 00:07:59.512 "uuid": "10c16922-4444-4403-bcc9-895f3c038259", 00:07:59.512 "is_configured": true, 
00:07:59.512 "data_offset": 0, 00:07:59.512 "data_size": 65536 00:07:59.512 } 00:07:59.512 ] 00:07:59.512 } 00:07:59.512 } 00:07:59.512 }' 00:07:59.512 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:59.777 BaseBdev2 00:07:59.777 BaseBdev3' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.777 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.777 [2024-11-20 03:14:49.363790] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.777 [2024-11-20 03:14:49.363829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.777 [2024-11-20 03:14:49.363878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.037 "name": "Existed_Raid", 00:08:00.037 "uuid": "0043dc2b-42cc-4bdf-a86d-e8a4d13856c4", 00:08:00.037 "strip_size_kb": 64, 00:08:00.037 "state": "offline", 00:08:00.037 "raid_level": "raid0", 00:08:00.037 "superblock": false, 00:08:00.037 "num_base_bdevs": 3, 00:08:00.037 "num_base_bdevs_discovered": 2, 00:08:00.037 "num_base_bdevs_operational": 2, 00:08:00.037 "base_bdevs_list": [ 00:08:00.037 { 00:08:00.037 "name": null, 00:08:00.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.037 "is_configured": false, 00:08:00.037 "data_offset": 0, 00:08:00.037 "data_size": 65536 00:08:00.037 }, 00:08:00.037 { 00:08:00.037 "name": "BaseBdev2", 00:08:00.037 "uuid": "9065e311-c628-4b0b-8013-45b190f79a27", 00:08:00.037 "is_configured": true, 00:08:00.037 "data_offset": 0, 00:08:00.037 "data_size": 65536 00:08:00.037 }, 00:08:00.037 { 00:08:00.037 "name": "BaseBdev3", 00:08:00.037 "uuid": "10c16922-4444-4403-bcc9-895f3c038259", 00:08:00.037 "is_configured": true, 00:08:00.037 "data_offset": 0, 00:08:00.037 "data_size": 65536 00:08:00.037 } 00:08:00.037 ] 00:08:00.037 }' 00:08:00.037 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.037 03:14:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.297 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:00.297 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.297 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.297 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:00.297 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.297 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.557 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.557 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.557 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.557 03:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:00.557 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.557 03:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.557 [2024-11-20 03:14:49.965213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.557 03:14:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.557 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.557 [2024-11-20 03:14:50.119501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:00.557 [2024-11-20 03:14:50.119553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.818 BaseBdev2 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.818 [ 00:08:00.818 { 00:08:00.818 "name": "BaseBdev2", 00:08:00.818 "aliases": [ 00:08:00.818 "304877b5-88db-4c5b-9ca8-d132f2fc8f01" 00:08:00.818 ], 00:08:00.818 "product_name": "Malloc disk", 00:08:00.818 "block_size": 512, 00:08:00.818 "num_blocks": 65536, 00:08:00.818 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:00.818 "assigned_rate_limits": { 00:08:00.818 "rw_ios_per_sec": 0, 00:08:00.818 "rw_mbytes_per_sec": 0, 00:08:00.818 "r_mbytes_per_sec": 0, 00:08:00.818 "w_mbytes_per_sec": 0 00:08:00.818 }, 00:08:00.818 "claimed": false, 00:08:00.818 "zoned": false, 00:08:00.818 "supported_io_types": { 00:08:00.818 "read": true, 00:08:00.818 "write": true, 00:08:00.818 "unmap": true, 00:08:00.818 "flush": true, 00:08:00.818 "reset": true, 00:08:00.818 "nvme_admin": false, 00:08:00.818 "nvme_io": false, 00:08:00.818 "nvme_io_md": false, 00:08:00.818 "write_zeroes": true, 00:08:00.818 "zcopy": true, 00:08:00.818 "get_zone_info": false, 00:08:00.818 "zone_management": false, 00:08:00.818 "zone_append": false, 00:08:00.818 "compare": false, 00:08:00.818 "compare_and_write": false, 00:08:00.818 "abort": true, 00:08:00.818 "seek_hole": false, 00:08:00.818 "seek_data": false, 00:08:00.818 "copy": true, 00:08:00.818 "nvme_iov_md": false 00:08:00.818 }, 00:08:00.818 "memory_domains": [ 00:08:00.818 { 00:08:00.818 "dma_device_id": "system", 00:08:00.818 "dma_device_type": 1 00:08:00.818 }, 
00:08:00.818 { 00:08:00.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.818 "dma_device_type": 2 00:08:00.818 } 00:08:00.818 ], 00:08:00.818 "driver_specific": {} 00:08:00.818 } 00:08:00.818 ] 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.818 BaseBdev3 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.818 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.819 [ 00:08:00.819 { 00:08:00.819 "name": "BaseBdev3", 00:08:00.819 "aliases": [ 00:08:00.819 "3cd2f136-6548-4434-ae27-448a83c9fc58" 00:08:00.819 ], 00:08:00.819 "product_name": "Malloc disk", 00:08:00.819 "block_size": 512, 00:08:00.819 "num_blocks": 65536, 00:08:00.819 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:00.819 "assigned_rate_limits": { 00:08:00.819 "rw_ios_per_sec": 0, 00:08:00.819 "rw_mbytes_per_sec": 0, 00:08:00.819 "r_mbytes_per_sec": 0, 00:08:00.819 "w_mbytes_per_sec": 0 00:08:00.819 }, 00:08:00.819 "claimed": false, 00:08:00.819 "zoned": false, 00:08:00.819 "supported_io_types": { 00:08:00.819 "read": true, 00:08:00.819 "write": true, 00:08:00.819 "unmap": true, 00:08:00.819 "flush": true, 00:08:00.819 "reset": true, 00:08:00.819 "nvme_admin": false, 00:08:00.819 "nvme_io": false, 00:08:00.819 "nvme_io_md": false, 00:08:00.819 "write_zeroes": true, 00:08:00.819 "zcopy": true, 00:08:00.819 "get_zone_info": false, 00:08:00.819 "zone_management": false, 00:08:00.819 "zone_append": false, 00:08:00.819 "compare": false, 00:08:00.819 "compare_and_write": false, 00:08:00.819 "abort": true, 00:08:00.819 "seek_hole": false, 00:08:00.819 "seek_data": false, 00:08:00.819 "copy": true, 00:08:00.819 "nvme_iov_md": false 00:08:00.819 }, 00:08:00.819 "memory_domains": [ 00:08:00.819 { 00:08:00.819 "dma_device_id": "system", 00:08:00.819 "dma_device_type": 1 00:08:00.819 }, 00:08:00.819 { 
00:08:00.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.819 "dma_device_type": 2 00:08:00.819 } 00:08:00.819 ], 00:08:00.819 "driver_specific": {} 00:08:00.819 } 00:08:00.819 ] 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.819 [2024-11-20 03:14:50.424824] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.819 [2024-11-20 03:14:50.424912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.819 [2024-11-20 03:14:50.424961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.819 [2024-11-20 03:14:50.426775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.819 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.080 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.080 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.080 "name": "Existed_Raid", 00:08:01.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.080 "strip_size_kb": 64, 00:08:01.080 "state": "configuring", 00:08:01.080 "raid_level": "raid0", 00:08:01.080 "superblock": false, 00:08:01.080 "num_base_bdevs": 3, 00:08:01.080 "num_base_bdevs_discovered": 2, 00:08:01.080 "num_base_bdevs_operational": 3, 00:08:01.080 "base_bdevs_list": [ 00:08:01.080 { 00:08:01.080 "name": "BaseBdev1", 00:08:01.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.080 
"is_configured": false, 00:08:01.080 "data_offset": 0, 00:08:01.080 "data_size": 0 00:08:01.080 }, 00:08:01.080 { 00:08:01.080 "name": "BaseBdev2", 00:08:01.080 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:01.080 "is_configured": true, 00:08:01.080 "data_offset": 0, 00:08:01.080 "data_size": 65536 00:08:01.080 }, 00:08:01.080 { 00:08:01.080 "name": "BaseBdev3", 00:08:01.080 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:01.080 "is_configured": true, 00:08:01.080 "data_offset": 0, 00:08:01.080 "data_size": 65536 00:08:01.080 } 00:08:01.080 ] 00:08:01.080 }' 00:08:01.080 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.080 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.340 [2024-11-20 03:14:50.788202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.340 03:14:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.340 "name": "Existed_Raid", 00:08:01.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.340 "strip_size_kb": 64, 00:08:01.340 "state": "configuring", 00:08:01.340 "raid_level": "raid0", 00:08:01.340 "superblock": false, 00:08:01.340 "num_base_bdevs": 3, 00:08:01.340 "num_base_bdevs_discovered": 1, 00:08:01.340 "num_base_bdevs_operational": 3, 00:08:01.340 "base_bdevs_list": [ 00:08:01.340 { 00:08:01.340 "name": "BaseBdev1", 00:08:01.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.340 "is_configured": false, 00:08:01.340 "data_offset": 0, 00:08:01.340 "data_size": 0 00:08:01.340 }, 00:08:01.340 { 00:08:01.340 "name": null, 00:08:01.340 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:01.340 "is_configured": false, 00:08:01.340 "data_offset": 0, 
00:08:01.340 "data_size": 65536 00:08:01.340 }, 00:08:01.340 { 00:08:01.340 "name": "BaseBdev3", 00:08:01.340 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:01.340 "is_configured": true, 00:08:01.340 "data_offset": 0, 00:08:01.340 "data_size": 65536 00:08:01.340 } 00:08:01.340 ] 00:08:01.340 }' 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.340 03:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.910 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.911 [2024-11-20 03:14:51.331830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.911 BaseBdev1 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.911 [ 00:08:01.911 { 00:08:01.911 "name": "BaseBdev1", 00:08:01.911 "aliases": [ 00:08:01.911 "e6bec393-420c-4ff0-b981-72ccfd2078ac" 00:08:01.911 ], 00:08:01.911 "product_name": "Malloc disk", 00:08:01.911 "block_size": 512, 00:08:01.911 "num_blocks": 65536, 00:08:01.911 "uuid": "e6bec393-420c-4ff0-b981-72ccfd2078ac", 00:08:01.911 "assigned_rate_limits": { 00:08:01.911 "rw_ios_per_sec": 0, 00:08:01.911 "rw_mbytes_per_sec": 0, 00:08:01.911 "r_mbytes_per_sec": 0, 00:08:01.911 "w_mbytes_per_sec": 0 00:08:01.911 }, 00:08:01.911 "claimed": true, 00:08:01.911 "claim_type": "exclusive_write", 00:08:01.911 "zoned": false, 00:08:01.911 "supported_io_types": { 00:08:01.911 "read": true, 00:08:01.911 "write": true, 00:08:01.911 "unmap": 
true, 00:08:01.911 "flush": true, 00:08:01.911 "reset": true, 00:08:01.911 "nvme_admin": false, 00:08:01.911 "nvme_io": false, 00:08:01.911 "nvme_io_md": false, 00:08:01.911 "write_zeroes": true, 00:08:01.911 "zcopy": true, 00:08:01.911 "get_zone_info": false, 00:08:01.911 "zone_management": false, 00:08:01.911 "zone_append": false, 00:08:01.911 "compare": false, 00:08:01.911 "compare_and_write": false, 00:08:01.911 "abort": true, 00:08:01.911 "seek_hole": false, 00:08:01.911 "seek_data": false, 00:08:01.911 "copy": true, 00:08:01.911 "nvme_iov_md": false 00:08:01.911 }, 00:08:01.911 "memory_domains": [ 00:08:01.911 { 00:08:01.911 "dma_device_id": "system", 00:08:01.911 "dma_device_type": 1 00:08:01.911 }, 00:08:01.911 { 00:08:01.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.911 "dma_device_type": 2 00:08:01.911 } 00:08:01.911 ], 00:08:01.911 "driver_specific": {} 00:08:01.911 } 00:08:01.911 ] 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.911 03:14:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.911 "name": "Existed_Raid", 00:08:01.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.911 "strip_size_kb": 64, 00:08:01.911 "state": "configuring", 00:08:01.911 "raid_level": "raid0", 00:08:01.911 "superblock": false, 00:08:01.911 "num_base_bdevs": 3, 00:08:01.911 "num_base_bdevs_discovered": 2, 00:08:01.911 "num_base_bdevs_operational": 3, 00:08:01.911 "base_bdevs_list": [ 00:08:01.911 { 00:08:01.911 "name": "BaseBdev1", 00:08:01.911 "uuid": "e6bec393-420c-4ff0-b981-72ccfd2078ac", 00:08:01.911 "is_configured": true, 00:08:01.911 "data_offset": 0, 00:08:01.911 "data_size": 65536 00:08:01.911 }, 00:08:01.911 { 00:08:01.911 "name": null, 00:08:01.911 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:01.911 "is_configured": false, 00:08:01.911 "data_offset": 0, 00:08:01.911 "data_size": 65536 00:08:01.911 }, 00:08:01.911 { 00:08:01.911 "name": "BaseBdev3", 00:08:01.911 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:01.911 "is_configured": true, 00:08:01.911 "data_offset": 0, 
00:08:01.911 "data_size": 65536 00:08:01.911 } 00:08:01.911 ] 00:08:01.911 }' 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.911 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.481 [2024-11-20 03:14:51.858976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.481 "name": "Existed_Raid", 00:08:02.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.481 "strip_size_kb": 64, 00:08:02.481 "state": "configuring", 00:08:02.481 "raid_level": "raid0", 00:08:02.481 "superblock": false, 00:08:02.481 "num_base_bdevs": 3, 00:08:02.481 "num_base_bdevs_discovered": 1, 00:08:02.481 "num_base_bdevs_operational": 3, 00:08:02.481 "base_bdevs_list": [ 00:08:02.481 { 00:08:02.481 "name": "BaseBdev1", 00:08:02.481 "uuid": "e6bec393-420c-4ff0-b981-72ccfd2078ac", 00:08:02.481 "is_configured": true, 00:08:02.481 "data_offset": 0, 00:08:02.481 "data_size": 65536 00:08:02.481 }, 00:08:02.481 { 
00:08:02.481 "name": null, 00:08:02.481 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:02.481 "is_configured": false, 00:08:02.481 "data_offset": 0, 00:08:02.481 "data_size": 65536 00:08:02.481 }, 00:08:02.481 { 00:08:02.481 "name": null, 00:08:02.481 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:02.481 "is_configured": false, 00:08:02.481 "data_offset": 0, 00:08:02.481 "data_size": 65536 00:08:02.481 } 00:08:02.481 ] 00:08:02.481 }' 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.481 03:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.741 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:02.741 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.741 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.741 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.741 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.000 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.001 [2024-11-20 03:14:52.390135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.001 "name": "Existed_Raid", 00:08:03.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.001 "strip_size_kb": 64, 00:08:03.001 "state": "configuring", 00:08:03.001 "raid_level": "raid0", 00:08:03.001 
"superblock": false, 00:08:03.001 "num_base_bdevs": 3, 00:08:03.001 "num_base_bdevs_discovered": 2, 00:08:03.001 "num_base_bdevs_operational": 3, 00:08:03.001 "base_bdevs_list": [ 00:08:03.001 { 00:08:03.001 "name": "BaseBdev1", 00:08:03.001 "uuid": "e6bec393-420c-4ff0-b981-72ccfd2078ac", 00:08:03.001 "is_configured": true, 00:08:03.001 "data_offset": 0, 00:08:03.001 "data_size": 65536 00:08:03.001 }, 00:08:03.001 { 00:08:03.001 "name": null, 00:08:03.001 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:03.001 "is_configured": false, 00:08:03.001 "data_offset": 0, 00:08:03.001 "data_size": 65536 00:08:03.001 }, 00:08:03.001 { 00:08:03.001 "name": "BaseBdev3", 00:08:03.001 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:03.001 "is_configured": true, 00:08:03.001 "data_offset": 0, 00:08:03.001 "data_size": 65536 00:08:03.001 } 00:08:03.001 ] 00:08:03.001 }' 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.001 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.261 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:03.261 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.261 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.261 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.261 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.261 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:03.261 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:03.261 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:03.261 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.521 [2024-11-20 03:14:52.897292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.521 03:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.521 03:14:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.521 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.521 "name": "Existed_Raid", 00:08:03.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.521 "strip_size_kb": 64, 00:08:03.521 "state": "configuring", 00:08:03.521 "raid_level": "raid0", 00:08:03.521 "superblock": false, 00:08:03.521 "num_base_bdevs": 3, 00:08:03.521 "num_base_bdevs_discovered": 1, 00:08:03.521 "num_base_bdevs_operational": 3, 00:08:03.521 "base_bdevs_list": [ 00:08:03.521 { 00:08:03.521 "name": null, 00:08:03.521 "uuid": "e6bec393-420c-4ff0-b981-72ccfd2078ac", 00:08:03.521 "is_configured": false, 00:08:03.521 "data_offset": 0, 00:08:03.521 "data_size": 65536 00:08:03.521 }, 00:08:03.521 { 00:08:03.521 "name": null, 00:08:03.521 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:03.521 "is_configured": false, 00:08:03.521 "data_offset": 0, 00:08:03.521 "data_size": 65536 00:08:03.521 }, 00:08:03.521 { 00:08:03.521 "name": "BaseBdev3", 00:08:03.521 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:03.521 "is_configured": true, 00:08:03.521 "data_offset": 0, 00:08:03.521 "data_size": 65536 00:08:03.521 } 00:08:03.521 ] 00:08:03.521 }' 00:08:03.521 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.521 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.780 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.780 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:03.780 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.780 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.780 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.040 [2024-11-20 03:14:53.434109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.040 "name": "Existed_Raid", 00:08:04.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.040 "strip_size_kb": 64, 00:08:04.040 "state": "configuring", 00:08:04.040 "raid_level": "raid0", 00:08:04.040 "superblock": false, 00:08:04.040 "num_base_bdevs": 3, 00:08:04.040 "num_base_bdevs_discovered": 2, 00:08:04.040 "num_base_bdevs_operational": 3, 00:08:04.040 "base_bdevs_list": [ 00:08:04.040 { 00:08:04.040 "name": null, 00:08:04.040 "uuid": "e6bec393-420c-4ff0-b981-72ccfd2078ac", 00:08:04.040 "is_configured": false, 00:08:04.040 "data_offset": 0, 00:08:04.040 "data_size": 65536 00:08:04.040 }, 00:08:04.040 { 00:08:04.040 "name": "BaseBdev2", 00:08:04.040 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:04.040 "is_configured": true, 00:08:04.040 "data_offset": 0, 00:08:04.040 "data_size": 65536 00:08:04.040 }, 00:08:04.040 { 00:08:04.040 "name": "BaseBdev3", 00:08:04.040 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:04.040 "is_configured": true, 00:08:04.040 "data_offset": 0, 00:08:04.040 "data_size": 65536 00:08:04.040 } 00:08:04.040 ] 00:08:04.040 }' 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.040 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:04.300 
03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.300 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e6bec393-420c-4ff0-b981-72ccfd2078ac 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.560 [2024-11-20 03:14:53.992639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:04.560 [2024-11-20 03:14:53.992682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:04.560 [2024-11-20 03:14:53.992691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:04.560 [2024-11-20 03:14:53.992935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:04.560 [2024-11-20 03:14:53.993086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:04.560 [2024-11-20 03:14:53.993100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:04.560 [2024-11-20 03:14:53.993353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.560 NewBaseBdev 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.560 03:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.560 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.560 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:04.560 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.560 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:04.560 [ 00:08:04.560 { 00:08:04.560 "name": "NewBaseBdev", 00:08:04.560 "aliases": [ 00:08:04.560 "e6bec393-420c-4ff0-b981-72ccfd2078ac" 00:08:04.560 ], 00:08:04.560 "product_name": "Malloc disk", 00:08:04.561 "block_size": 512, 00:08:04.561 "num_blocks": 65536, 00:08:04.561 "uuid": "e6bec393-420c-4ff0-b981-72ccfd2078ac", 00:08:04.561 "assigned_rate_limits": { 00:08:04.561 "rw_ios_per_sec": 0, 00:08:04.561 "rw_mbytes_per_sec": 0, 00:08:04.561 "r_mbytes_per_sec": 0, 00:08:04.561 "w_mbytes_per_sec": 0 00:08:04.561 }, 00:08:04.561 "claimed": true, 00:08:04.561 "claim_type": "exclusive_write", 00:08:04.561 "zoned": false, 00:08:04.561 "supported_io_types": { 00:08:04.561 "read": true, 00:08:04.561 "write": true, 00:08:04.561 "unmap": true, 00:08:04.561 "flush": true, 00:08:04.561 "reset": true, 00:08:04.561 "nvme_admin": false, 00:08:04.561 "nvme_io": false, 00:08:04.561 "nvme_io_md": false, 00:08:04.561 "write_zeroes": true, 00:08:04.561 "zcopy": true, 00:08:04.561 "get_zone_info": false, 00:08:04.561 "zone_management": false, 00:08:04.561 "zone_append": false, 00:08:04.561 "compare": false, 00:08:04.561 "compare_and_write": false, 00:08:04.561 "abort": true, 00:08:04.561 "seek_hole": false, 00:08:04.561 "seek_data": false, 00:08:04.561 "copy": true, 00:08:04.561 "nvme_iov_md": false 00:08:04.561 }, 00:08:04.561 "memory_domains": [ 00:08:04.561 { 00:08:04.561 "dma_device_id": "system", 00:08:04.561 "dma_device_type": 1 00:08:04.561 }, 00:08:04.561 { 00:08:04.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.561 "dma_device_type": 2 00:08:04.561 } 00:08:04.561 ], 00:08:04.561 "driver_specific": {} 00:08:04.561 } 00:08:04.561 ] 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.561 "name": "Existed_Raid", 00:08:04.561 "uuid": "307c4c72-1d4c-449b-a319-c7424b24c54f", 00:08:04.561 "strip_size_kb": 64, 00:08:04.561 "state": "online", 00:08:04.561 "raid_level": "raid0", 00:08:04.561 "superblock": false, 00:08:04.561 "num_base_bdevs": 3, 00:08:04.561 
"num_base_bdevs_discovered": 3, 00:08:04.561 "num_base_bdevs_operational": 3, 00:08:04.561 "base_bdevs_list": [ 00:08:04.561 { 00:08:04.561 "name": "NewBaseBdev", 00:08:04.561 "uuid": "e6bec393-420c-4ff0-b981-72ccfd2078ac", 00:08:04.561 "is_configured": true, 00:08:04.561 "data_offset": 0, 00:08:04.561 "data_size": 65536 00:08:04.561 }, 00:08:04.561 { 00:08:04.561 "name": "BaseBdev2", 00:08:04.561 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:04.561 "is_configured": true, 00:08:04.561 "data_offset": 0, 00:08:04.561 "data_size": 65536 00:08:04.561 }, 00:08:04.561 { 00:08:04.561 "name": "BaseBdev3", 00:08:04.561 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:04.561 "is_configured": true, 00:08:04.561 "data_offset": 0, 00:08:04.561 "data_size": 65536 00:08:04.561 } 00:08:04.561 ] 00:08:04.561 }' 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.561 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.133 [2024-11-20 03:14:54.476156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.133 "name": "Existed_Raid", 00:08:05.133 "aliases": [ 00:08:05.133 "307c4c72-1d4c-449b-a319-c7424b24c54f" 00:08:05.133 ], 00:08:05.133 "product_name": "Raid Volume", 00:08:05.133 "block_size": 512, 00:08:05.133 "num_blocks": 196608, 00:08:05.133 "uuid": "307c4c72-1d4c-449b-a319-c7424b24c54f", 00:08:05.133 "assigned_rate_limits": { 00:08:05.133 "rw_ios_per_sec": 0, 00:08:05.133 "rw_mbytes_per_sec": 0, 00:08:05.133 "r_mbytes_per_sec": 0, 00:08:05.133 "w_mbytes_per_sec": 0 00:08:05.133 }, 00:08:05.133 "claimed": false, 00:08:05.133 "zoned": false, 00:08:05.133 "supported_io_types": { 00:08:05.133 "read": true, 00:08:05.133 "write": true, 00:08:05.133 "unmap": true, 00:08:05.133 "flush": true, 00:08:05.133 "reset": true, 00:08:05.133 "nvme_admin": false, 00:08:05.133 "nvme_io": false, 00:08:05.133 "nvme_io_md": false, 00:08:05.133 "write_zeroes": true, 00:08:05.133 "zcopy": false, 00:08:05.133 "get_zone_info": false, 00:08:05.133 "zone_management": false, 00:08:05.133 "zone_append": false, 00:08:05.133 "compare": false, 00:08:05.133 "compare_and_write": false, 00:08:05.133 "abort": false, 00:08:05.133 "seek_hole": false, 00:08:05.133 "seek_data": false, 00:08:05.133 "copy": false, 00:08:05.133 "nvme_iov_md": false 00:08:05.133 }, 00:08:05.133 "memory_domains": [ 00:08:05.133 { 00:08:05.133 "dma_device_id": "system", 00:08:05.133 "dma_device_type": 1 00:08:05.133 }, 00:08:05.133 { 00:08:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.133 "dma_device_type": 2 00:08:05.133 }, 00:08:05.133 
{ 00:08:05.133 "dma_device_id": "system", 00:08:05.133 "dma_device_type": 1 00:08:05.133 }, 00:08:05.133 { 00:08:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.133 "dma_device_type": 2 00:08:05.133 }, 00:08:05.133 { 00:08:05.133 "dma_device_id": "system", 00:08:05.133 "dma_device_type": 1 00:08:05.133 }, 00:08:05.133 { 00:08:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.133 "dma_device_type": 2 00:08:05.133 } 00:08:05.133 ], 00:08:05.133 "driver_specific": { 00:08:05.133 "raid": { 00:08:05.133 "uuid": "307c4c72-1d4c-449b-a319-c7424b24c54f", 00:08:05.133 "strip_size_kb": 64, 00:08:05.133 "state": "online", 00:08:05.133 "raid_level": "raid0", 00:08:05.133 "superblock": false, 00:08:05.133 "num_base_bdevs": 3, 00:08:05.133 "num_base_bdevs_discovered": 3, 00:08:05.133 "num_base_bdevs_operational": 3, 00:08:05.133 "base_bdevs_list": [ 00:08:05.133 { 00:08:05.133 "name": "NewBaseBdev", 00:08:05.133 "uuid": "e6bec393-420c-4ff0-b981-72ccfd2078ac", 00:08:05.133 "is_configured": true, 00:08:05.133 "data_offset": 0, 00:08:05.133 "data_size": 65536 00:08:05.133 }, 00:08:05.133 { 00:08:05.133 "name": "BaseBdev2", 00:08:05.133 "uuid": "304877b5-88db-4c5b-9ca8-d132f2fc8f01", 00:08:05.133 "is_configured": true, 00:08:05.133 "data_offset": 0, 00:08:05.133 "data_size": 65536 00:08:05.133 }, 00:08:05.133 { 00:08:05.133 "name": "BaseBdev3", 00:08:05.133 "uuid": "3cd2f136-6548-4434-ae27-448a83c9fc58", 00:08:05.133 "is_configured": true, 00:08:05.133 "data_offset": 0, 00:08:05.133 "data_size": 65536 00:08:05.133 } 00:08:05.133 ] 00:08:05.133 } 00:08:05.133 } 00:08:05.133 }' 00:08:05.133 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:05.134 BaseBdev2 00:08:05.134 BaseBdev3' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.134 
03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.134 [2024-11-20 03:14:54.731415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.134 [2024-11-20 03:14:54.731443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.134 [2024-11-20 03:14:54.731524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.134 [2024-11-20 03:14:54.731576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.134 [2024-11-20 03:14:54.731588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63678 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63678 ']' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63678 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.134 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63678 00:08:05.409 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.409 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.409 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63678' 00:08:05.409 killing process with pid 63678 00:08:05.409 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63678 00:08:05.409 [2024-11-20 03:14:54.782707] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.409 03:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63678 00:08:05.681 [2024-11-20 03:14:55.083340] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.620 ************************************ 00:08:06.620 END TEST raid_state_function_test 00:08:06.620 ************************************ 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.620 00:08:06.620 real 0m10.549s 00:08:06.620 user 0m16.825s 
00:08:06.620 sys 0m1.764s 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.620 03:14:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:06.620 03:14:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.620 03:14:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.620 03:14:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.620 ************************************ 00:08:06.620 START TEST raid_state_function_test_sb 00:08:06.620 ************************************ 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.620 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64299 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64299' 00:08:06.880 Process raid pid: 64299 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64299 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64299 ']' 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.880 03:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.880 [2024-11-20 03:14:56.340116] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:08:06.880 [2024-11-20 03:14:56.340322] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.880 [2024-11-20 03:14:56.497182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.140 [2024-11-20 03:14:56.613348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.400 [2024-11-20 03:14:56.813110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.400 [2024-11-20 03:14:56.813247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.660 [2024-11-20 03:14:57.173270] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.660 [2024-11-20 03:14:57.173393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.660 [2024-11-20 03:14:57.173408] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.660 [2024-11-20 03:14:57.173418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.660 [2024-11-20 03:14:57.173425] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:07.660 [2024-11-20 03:14:57.173433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.660 "name": "Existed_Raid", 00:08:07.660 "uuid": "03ccd5f6-d265-4def-bb9d-ca750d631249", 00:08:07.660 "strip_size_kb": 64, 00:08:07.660 "state": "configuring", 00:08:07.660 "raid_level": "raid0", 00:08:07.660 "superblock": true, 00:08:07.660 "num_base_bdevs": 3, 00:08:07.660 "num_base_bdevs_discovered": 0, 00:08:07.660 "num_base_bdevs_operational": 3, 00:08:07.660 "base_bdevs_list": [ 00:08:07.660 { 00:08:07.660 "name": "BaseBdev1", 00:08:07.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.660 "is_configured": false, 00:08:07.660 "data_offset": 0, 00:08:07.660 "data_size": 0 00:08:07.660 }, 00:08:07.660 { 00:08:07.660 "name": "BaseBdev2", 00:08:07.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.660 "is_configured": false, 00:08:07.660 "data_offset": 0, 00:08:07.660 "data_size": 0 00:08:07.660 }, 00:08:07.660 { 00:08:07.660 "name": "BaseBdev3", 00:08:07.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.660 "is_configured": false, 00:08:07.660 "data_offset": 0, 00:08:07.660 "data_size": 0 00:08:07.660 } 00:08:07.660 ] 00:08:07.660 }' 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.660 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.230 [2024-11-20 03:14:57.652405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.230 [2024-11-20 03:14:57.652502] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.230 [2024-11-20 03:14:57.660404] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.230 [2024-11-20 03:14:57.660502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.230 [2024-11-20 03:14:57.660532] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.230 [2024-11-20 03:14:57.660555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.230 [2024-11-20 03:14:57.660573] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:08.230 [2024-11-20 03:14:57.660595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.230 [2024-11-20 03:14:57.703540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.230 BaseBdev1 
00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.230 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.231 [ 00:08:08.231 { 00:08:08.231 "name": "BaseBdev1", 00:08:08.231 "aliases": [ 00:08:08.231 "13bd2d88-d1e5-4d0c-bc77-1e5f3a11daa8" 00:08:08.231 ], 00:08:08.231 "product_name": "Malloc disk", 00:08:08.231 "block_size": 512, 00:08:08.231 "num_blocks": 65536, 00:08:08.231 "uuid": "13bd2d88-d1e5-4d0c-bc77-1e5f3a11daa8", 00:08:08.231 "assigned_rate_limits": { 00:08:08.231 
"rw_ios_per_sec": 0, 00:08:08.231 "rw_mbytes_per_sec": 0, 00:08:08.231 "r_mbytes_per_sec": 0, 00:08:08.231 "w_mbytes_per_sec": 0 00:08:08.231 }, 00:08:08.231 "claimed": true, 00:08:08.231 "claim_type": "exclusive_write", 00:08:08.231 "zoned": false, 00:08:08.231 "supported_io_types": { 00:08:08.231 "read": true, 00:08:08.231 "write": true, 00:08:08.231 "unmap": true, 00:08:08.231 "flush": true, 00:08:08.231 "reset": true, 00:08:08.231 "nvme_admin": false, 00:08:08.231 "nvme_io": false, 00:08:08.231 "nvme_io_md": false, 00:08:08.231 "write_zeroes": true, 00:08:08.231 "zcopy": true, 00:08:08.231 "get_zone_info": false, 00:08:08.231 "zone_management": false, 00:08:08.231 "zone_append": false, 00:08:08.231 "compare": false, 00:08:08.231 "compare_and_write": false, 00:08:08.231 "abort": true, 00:08:08.231 "seek_hole": false, 00:08:08.231 "seek_data": false, 00:08:08.231 "copy": true, 00:08:08.231 "nvme_iov_md": false 00:08:08.231 }, 00:08:08.231 "memory_domains": [ 00:08:08.231 { 00:08:08.231 "dma_device_id": "system", 00:08:08.231 "dma_device_type": 1 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.231 "dma_device_type": 2 00:08:08.231 } 00:08:08.231 ], 00:08:08.231 "driver_specific": {} 00:08:08.231 } 00:08:08.231 ] 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.231 "name": "Existed_Raid", 00:08:08.231 "uuid": "075c3098-6be5-448c-aa0b-3a6ca60bc8a8", 00:08:08.231 "strip_size_kb": 64, 00:08:08.231 "state": "configuring", 00:08:08.231 "raid_level": "raid0", 00:08:08.231 "superblock": true, 00:08:08.231 "num_base_bdevs": 3, 00:08:08.231 "num_base_bdevs_discovered": 1, 00:08:08.231 "num_base_bdevs_operational": 3, 00:08:08.231 "base_bdevs_list": [ 00:08:08.231 { 00:08:08.231 "name": "BaseBdev1", 00:08:08.231 "uuid": "13bd2d88-d1e5-4d0c-bc77-1e5f3a11daa8", 00:08:08.231 "is_configured": true, 00:08:08.231 "data_offset": 2048, 00:08:08.231 "data_size": 63488 
00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "name": "BaseBdev2", 00:08:08.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.231 "is_configured": false, 00:08:08.231 "data_offset": 0, 00:08:08.231 "data_size": 0 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "name": "BaseBdev3", 00:08:08.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.231 "is_configured": false, 00:08:08.231 "data_offset": 0, 00:08:08.231 "data_size": 0 00:08:08.231 } 00:08:08.231 ] 00:08:08.231 }' 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.231 03:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.801 [2024-11-20 03:14:58.162792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.801 [2024-11-20 03:14:58.162925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.801 [2024-11-20 03:14:58.170828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.801 [2024-11-20 
03:14:58.172700] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.801 [2024-11-20 03:14:58.172770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.801 [2024-11-20 03:14:58.172798] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:08.801 [2024-11-20 03:14:58.172820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.801 "name": "Existed_Raid", 00:08:08.801 "uuid": "5aaa2253-fc77-4616-a8c8-20ae1d2eff6b", 00:08:08.801 "strip_size_kb": 64, 00:08:08.801 "state": "configuring", 00:08:08.801 "raid_level": "raid0", 00:08:08.801 "superblock": true, 00:08:08.801 "num_base_bdevs": 3, 00:08:08.801 "num_base_bdevs_discovered": 1, 00:08:08.801 "num_base_bdevs_operational": 3, 00:08:08.801 "base_bdevs_list": [ 00:08:08.801 { 00:08:08.801 "name": "BaseBdev1", 00:08:08.801 "uuid": "13bd2d88-d1e5-4d0c-bc77-1e5f3a11daa8", 00:08:08.801 "is_configured": true, 00:08:08.801 "data_offset": 2048, 00:08:08.801 "data_size": 63488 00:08:08.801 }, 00:08:08.801 { 00:08:08.801 "name": "BaseBdev2", 00:08:08.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.801 "is_configured": false, 00:08:08.801 "data_offset": 0, 00:08:08.801 "data_size": 0 00:08:08.801 }, 00:08:08.801 { 00:08:08.801 "name": "BaseBdev3", 00:08:08.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.801 "is_configured": false, 00:08:08.801 "data_offset": 0, 00:08:08.801 "data_size": 0 00:08:08.801 } 00:08:08.801 ] 00:08:08.801 }' 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.801 03:14:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.061 [2024-11-20 03:14:58.659430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.061 BaseBdev2 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.061 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.061 [ 00:08:09.061 { 00:08:09.061 "name": "BaseBdev2", 00:08:09.061 "aliases": [ 00:08:09.061 "fda49969-b1db-482f-b664-9d5f3b56e624" 00:08:09.061 ], 00:08:09.061 "product_name": "Malloc disk", 00:08:09.061 "block_size": 512, 00:08:09.061 "num_blocks": 65536, 00:08:09.061 "uuid": "fda49969-b1db-482f-b664-9d5f3b56e624", 00:08:09.061 "assigned_rate_limits": { 00:08:09.061 "rw_ios_per_sec": 0, 00:08:09.061 "rw_mbytes_per_sec": 0, 00:08:09.061 "r_mbytes_per_sec": 0, 00:08:09.061 "w_mbytes_per_sec": 0 00:08:09.061 }, 00:08:09.061 "claimed": true, 00:08:09.061 "claim_type": "exclusive_write", 00:08:09.061 "zoned": false, 00:08:09.061 "supported_io_types": { 00:08:09.061 "read": true, 00:08:09.061 "write": true, 00:08:09.061 "unmap": true, 00:08:09.061 "flush": true, 00:08:09.061 "reset": true, 00:08:09.061 "nvme_admin": false, 00:08:09.061 "nvme_io": false, 00:08:09.061 "nvme_io_md": false, 00:08:09.061 "write_zeroes": true, 00:08:09.061 "zcopy": true, 00:08:09.061 "get_zone_info": false, 00:08:09.061 "zone_management": false, 00:08:09.061 "zone_append": false, 00:08:09.061 "compare": false, 00:08:09.320 "compare_and_write": false, 00:08:09.320 "abort": true, 00:08:09.320 "seek_hole": false, 00:08:09.320 "seek_data": false, 00:08:09.320 "copy": true, 00:08:09.320 "nvme_iov_md": false 00:08:09.320 }, 00:08:09.320 "memory_domains": [ 00:08:09.320 { 00:08:09.320 "dma_device_id": "system", 00:08:09.320 "dma_device_type": 1 00:08:09.320 }, 00:08:09.320 { 00:08:09.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.320 "dma_device_type": 2 00:08:09.320 } 00:08:09.320 ], 00:08:09.320 "driver_specific": {} 00:08:09.320 } 00:08:09.320 ] 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.320 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.321 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.321 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.321 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.321 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.321 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.321 03:14:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.321 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.321 "name": "Existed_Raid", 00:08:09.321 "uuid": "5aaa2253-fc77-4616-a8c8-20ae1d2eff6b", 00:08:09.321 "strip_size_kb": 64, 00:08:09.321 "state": "configuring", 00:08:09.321 "raid_level": "raid0", 00:08:09.321 "superblock": true, 00:08:09.321 "num_base_bdevs": 3, 00:08:09.321 "num_base_bdevs_discovered": 2, 00:08:09.321 "num_base_bdevs_operational": 3, 00:08:09.321 "base_bdevs_list": [ 00:08:09.321 { 00:08:09.321 "name": "BaseBdev1", 00:08:09.321 "uuid": "13bd2d88-d1e5-4d0c-bc77-1e5f3a11daa8", 00:08:09.321 "is_configured": true, 00:08:09.321 "data_offset": 2048, 00:08:09.321 "data_size": 63488 00:08:09.321 }, 00:08:09.321 { 00:08:09.321 "name": "BaseBdev2", 00:08:09.321 "uuid": "fda49969-b1db-482f-b664-9d5f3b56e624", 00:08:09.321 "is_configured": true, 00:08:09.321 "data_offset": 2048, 00:08:09.321 "data_size": 63488 00:08:09.321 }, 00:08:09.321 { 00:08:09.321 "name": "BaseBdev3", 00:08:09.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.321 "is_configured": false, 00:08:09.321 "data_offset": 0, 00:08:09.321 "data_size": 0 00:08:09.321 } 00:08:09.321 ] 00:08:09.321 }' 00:08:09.321 03:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.321 03:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.579 [2024-11-20 03:14:59.174462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.579 [2024-11-20 03:14:59.174781] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.579 [2024-11-20 03:14:59.174805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:09.579 [2024-11-20 03:14:59.175098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:09.579 [2024-11-20 03:14:59.175250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.579 [2024-11-20 03:14:59.175264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:09.579 [2024-11-20 03:14:59.175413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.579 BaseBdev3 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.579 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.579 [ 00:08:09.579 { 00:08:09.579 "name": "BaseBdev3", 00:08:09.579 "aliases": [ 00:08:09.579 "658efacf-0ad8-4998-bc45-1f1f97c35950" 00:08:09.579 ], 00:08:09.579 "product_name": "Malloc disk", 00:08:09.579 "block_size": 512, 00:08:09.579 "num_blocks": 65536, 00:08:09.579 "uuid": "658efacf-0ad8-4998-bc45-1f1f97c35950", 00:08:09.579 "assigned_rate_limits": { 00:08:09.579 "rw_ios_per_sec": 0, 00:08:09.579 "rw_mbytes_per_sec": 0, 00:08:09.579 "r_mbytes_per_sec": 0, 00:08:09.579 "w_mbytes_per_sec": 0 00:08:09.579 }, 00:08:09.579 "claimed": true, 00:08:09.579 "claim_type": "exclusive_write", 00:08:09.579 "zoned": false, 00:08:09.579 "supported_io_types": { 00:08:09.579 "read": true, 00:08:09.579 "write": true, 00:08:09.579 "unmap": true, 00:08:09.579 "flush": true, 00:08:09.579 "reset": true, 00:08:09.579 "nvme_admin": false, 00:08:09.579 "nvme_io": false, 00:08:09.579 "nvme_io_md": false, 00:08:09.579 "write_zeroes": true, 00:08:09.579 "zcopy": true, 00:08:09.579 "get_zone_info": false, 00:08:09.579 "zone_management": false, 00:08:09.579 "zone_append": false, 00:08:09.579 "compare": false, 00:08:09.837 "compare_and_write": false, 00:08:09.837 "abort": true, 00:08:09.837 "seek_hole": false, 00:08:09.837 "seek_data": false, 00:08:09.837 "copy": true, 00:08:09.837 "nvme_iov_md": false 00:08:09.837 }, 00:08:09.837 "memory_domains": [ 00:08:09.837 { 00:08:09.837 "dma_device_id": "system", 00:08:09.837 "dma_device_type": 1 00:08:09.837 }, 00:08:09.837 { 00:08:09.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.837 "dma_device_type": 2 00:08:09.837 } 00:08:09.837 ], 00:08:09.837 "driver_specific": 
{} 00:08:09.837 } 00:08:09.837 ] 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.837 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.838 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.838 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:09.838 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.838 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.838 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.838 "name": "Existed_Raid", 00:08:09.838 "uuid": "5aaa2253-fc77-4616-a8c8-20ae1d2eff6b", 00:08:09.838 "strip_size_kb": 64, 00:08:09.838 "state": "online", 00:08:09.838 "raid_level": "raid0", 00:08:09.838 "superblock": true, 00:08:09.838 "num_base_bdevs": 3, 00:08:09.838 "num_base_bdevs_discovered": 3, 00:08:09.838 "num_base_bdevs_operational": 3, 00:08:09.838 "base_bdevs_list": [ 00:08:09.838 { 00:08:09.838 "name": "BaseBdev1", 00:08:09.838 "uuid": "13bd2d88-d1e5-4d0c-bc77-1e5f3a11daa8", 00:08:09.838 "is_configured": true, 00:08:09.838 "data_offset": 2048, 00:08:09.838 "data_size": 63488 00:08:09.838 }, 00:08:09.838 { 00:08:09.838 "name": "BaseBdev2", 00:08:09.838 "uuid": "fda49969-b1db-482f-b664-9d5f3b56e624", 00:08:09.838 "is_configured": true, 00:08:09.838 "data_offset": 2048, 00:08:09.838 "data_size": 63488 00:08:09.838 }, 00:08:09.838 { 00:08:09.838 "name": "BaseBdev3", 00:08:09.838 "uuid": "658efacf-0ad8-4998-bc45-1f1f97c35950", 00:08:09.838 "is_configured": true, 00:08:09.838 "data_offset": 2048, 00:08:09.838 "data_size": 63488 00:08:09.838 } 00:08:09.838 ] 00:08:09.838 }' 00:08:09.838 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.838 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:10.097 [2024-11-20 03:14:59.673986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.097 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:10.097 "name": "Existed_Raid", 00:08:10.097 "aliases": [ 00:08:10.097 "5aaa2253-fc77-4616-a8c8-20ae1d2eff6b" 00:08:10.097 ], 00:08:10.097 "product_name": "Raid Volume", 00:08:10.097 "block_size": 512, 00:08:10.097 "num_blocks": 190464, 00:08:10.097 "uuid": "5aaa2253-fc77-4616-a8c8-20ae1d2eff6b", 00:08:10.097 "assigned_rate_limits": { 00:08:10.097 "rw_ios_per_sec": 0, 00:08:10.097 "rw_mbytes_per_sec": 0, 00:08:10.097 "r_mbytes_per_sec": 0, 00:08:10.097 "w_mbytes_per_sec": 0 00:08:10.097 }, 00:08:10.097 "claimed": false, 00:08:10.098 "zoned": false, 00:08:10.098 "supported_io_types": { 00:08:10.098 "read": true, 00:08:10.098 "write": true, 00:08:10.098 "unmap": true, 00:08:10.098 "flush": true, 00:08:10.098 "reset": true, 00:08:10.098 "nvme_admin": false, 00:08:10.098 "nvme_io": false, 00:08:10.098 "nvme_io_md": false, 00:08:10.098 
"write_zeroes": true, 00:08:10.098 "zcopy": false, 00:08:10.098 "get_zone_info": false, 00:08:10.098 "zone_management": false, 00:08:10.098 "zone_append": false, 00:08:10.098 "compare": false, 00:08:10.098 "compare_and_write": false, 00:08:10.098 "abort": false, 00:08:10.098 "seek_hole": false, 00:08:10.098 "seek_data": false, 00:08:10.098 "copy": false, 00:08:10.098 "nvme_iov_md": false 00:08:10.098 }, 00:08:10.098 "memory_domains": [ 00:08:10.098 { 00:08:10.098 "dma_device_id": "system", 00:08:10.098 "dma_device_type": 1 00:08:10.098 }, 00:08:10.098 { 00:08:10.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.098 "dma_device_type": 2 00:08:10.098 }, 00:08:10.098 { 00:08:10.098 "dma_device_id": "system", 00:08:10.098 "dma_device_type": 1 00:08:10.098 }, 00:08:10.098 { 00:08:10.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.098 "dma_device_type": 2 00:08:10.098 }, 00:08:10.098 { 00:08:10.098 "dma_device_id": "system", 00:08:10.098 "dma_device_type": 1 00:08:10.098 }, 00:08:10.098 { 00:08:10.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.098 "dma_device_type": 2 00:08:10.098 } 00:08:10.098 ], 00:08:10.098 "driver_specific": { 00:08:10.098 "raid": { 00:08:10.098 "uuid": "5aaa2253-fc77-4616-a8c8-20ae1d2eff6b", 00:08:10.098 "strip_size_kb": 64, 00:08:10.098 "state": "online", 00:08:10.098 "raid_level": "raid0", 00:08:10.098 "superblock": true, 00:08:10.098 "num_base_bdevs": 3, 00:08:10.098 "num_base_bdevs_discovered": 3, 00:08:10.098 "num_base_bdevs_operational": 3, 00:08:10.098 "base_bdevs_list": [ 00:08:10.098 { 00:08:10.098 "name": "BaseBdev1", 00:08:10.098 "uuid": "13bd2d88-d1e5-4d0c-bc77-1e5f3a11daa8", 00:08:10.098 "is_configured": true, 00:08:10.098 "data_offset": 2048, 00:08:10.098 "data_size": 63488 00:08:10.098 }, 00:08:10.098 { 00:08:10.098 "name": "BaseBdev2", 00:08:10.098 "uuid": "fda49969-b1db-482f-b664-9d5f3b56e624", 00:08:10.098 "is_configured": true, 00:08:10.098 "data_offset": 2048, 00:08:10.098 "data_size": 63488 00:08:10.098 }, 
00:08:10.098 { 00:08:10.098 "name": "BaseBdev3", 00:08:10.098 "uuid": "658efacf-0ad8-4998-bc45-1f1f97c35950", 00:08:10.098 "is_configured": true, 00:08:10.098 "data_offset": 2048, 00:08:10.098 "data_size": 63488 00:08:10.098 } 00:08:10.098 ] 00:08:10.098 } 00:08:10.098 } 00:08:10.098 }' 00:08:10.098 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:10.358 BaseBdev2 00:08:10.358 BaseBdev3' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.358 
03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.358 03:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.358 [2024-11-20 03:14:59.977192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.358 [2024-11-20 03:14:59.977225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.358 [2024-11-20 03:14:59.977281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.617 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.617 "name": "Existed_Raid", 00:08:10.617 "uuid": "5aaa2253-fc77-4616-a8c8-20ae1d2eff6b", 00:08:10.617 "strip_size_kb": 64, 00:08:10.617 "state": "offline", 00:08:10.617 "raid_level": "raid0", 00:08:10.617 "superblock": true, 00:08:10.617 "num_base_bdevs": 3, 00:08:10.617 "num_base_bdevs_discovered": 2, 00:08:10.617 "num_base_bdevs_operational": 2, 00:08:10.617 "base_bdevs_list": [ 00:08:10.618 { 00:08:10.618 "name": null, 00:08:10.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.618 "is_configured": false, 00:08:10.618 "data_offset": 0, 00:08:10.618 "data_size": 63488 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "name": "BaseBdev2", 00:08:10.618 "uuid": "fda49969-b1db-482f-b664-9d5f3b56e624", 00:08:10.618 "is_configured": true, 00:08:10.618 "data_offset": 2048, 00:08:10.618 "data_size": 63488 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "name": "BaseBdev3", 00:08:10.618 "uuid": "658efacf-0ad8-4998-bc45-1f1f97c35950", 
00:08:10.618 "is_configured": true, 00:08:10.618 "data_offset": 2048, 00:08:10.618 "data_size": 63488 00:08:10.618 } 00:08:10.618 ] 00:08:10.618 }' 00:08:10.618 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.618 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.187 [2024-11-20 03:15:00.573007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.187 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.187 [2024-11-20 03:15:00.726433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:11.187 [2024-11-20 03:15:00.726492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.447 BaseBdev2 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.447 [ 00:08:11.447 { 00:08:11.447 "name": "BaseBdev2", 00:08:11.447 "aliases": [ 00:08:11.447 "76d47c9b-a17a-4343-99ce-6ab39baffbcf" 00:08:11.447 ], 00:08:11.447 "product_name": "Malloc disk", 00:08:11.447 "block_size": 512, 00:08:11.447 "num_blocks": 65536, 00:08:11.447 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf", 00:08:11.447 "assigned_rate_limits": { 00:08:11.447 "rw_ios_per_sec": 0, 00:08:11.447 "rw_mbytes_per_sec": 0, 00:08:11.447 "r_mbytes_per_sec": 0, 00:08:11.447 "w_mbytes_per_sec": 0 00:08:11.447 }, 00:08:11.447 "claimed": false, 00:08:11.447 "zoned": false, 00:08:11.447 "supported_io_types": { 00:08:11.447 "read": true, 00:08:11.447 "write": true, 00:08:11.447 "unmap": true, 00:08:11.447 "flush": true, 00:08:11.447 "reset": true, 00:08:11.447 "nvme_admin": false, 00:08:11.447 "nvme_io": false, 00:08:11.447 "nvme_io_md": false, 00:08:11.447 "write_zeroes": true, 00:08:11.447 "zcopy": true, 00:08:11.447 "get_zone_info": false, 00:08:11.447 "zone_management": false, 00:08:11.447 
"zone_append": false, 00:08:11.447 "compare": false, 00:08:11.447 "compare_and_write": false, 00:08:11.447 "abort": true, 00:08:11.447 "seek_hole": false, 00:08:11.447 "seek_data": false, 00:08:11.447 "copy": true, 00:08:11.447 "nvme_iov_md": false 00:08:11.447 }, 00:08:11.447 "memory_domains": [ 00:08:11.447 { 00:08:11.447 "dma_device_id": "system", 00:08:11.447 "dma_device_type": 1 00:08:11.447 }, 00:08:11.447 { 00:08:11.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.447 "dma_device_type": 2 00:08:11.447 } 00:08:11.447 ], 00:08:11.447 "driver_specific": {} 00:08:11.447 } 00:08:11.447 ] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.447 BaseBdev3 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:11.447 
03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.447 03:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.447 [ 00:08:11.447 { 00:08:11.447 "name": "BaseBdev3", 00:08:11.447 "aliases": [ 00:08:11.447 "95e0e758-dfac-4c27-b499-13fd1403cacb" 00:08:11.447 ], 00:08:11.447 "product_name": "Malloc disk", 00:08:11.447 "block_size": 512, 00:08:11.447 "num_blocks": 65536, 00:08:11.447 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb", 00:08:11.447 "assigned_rate_limits": { 00:08:11.447 "rw_ios_per_sec": 0, 00:08:11.447 "rw_mbytes_per_sec": 0, 00:08:11.447 "r_mbytes_per_sec": 0, 00:08:11.447 "w_mbytes_per_sec": 0 00:08:11.447 }, 00:08:11.447 "claimed": false, 00:08:11.447 "zoned": false, 00:08:11.447 "supported_io_types": { 00:08:11.447 "read": true, 00:08:11.447 "write": true, 00:08:11.447 "unmap": true, 00:08:11.447 "flush": true, 00:08:11.447 "reset": true, 00:08:11.447 "nvme_admin": false, 00:08:11.447 "nvme_io": false, 00:08:11.447 "nvme_io_md": false, 00:08:11.447 "write_zeroes": true, 00:08:11.447 "zcopy": true, 00:08:11.447 "get_zone_info": false, 
00:08:11.447 "zone_management": false, 00:08:11.447 "zone_append": false, 00:08:11.447 "compare": false, 00:08:11.447 "compare_and_write": false, 00:08:11.447 "abort": true, 00:08:11.447 "seek_hole": false, 00:08:11.447 "seek_data": false, 00:08:11.447 "copy": true, 00:08:11.447 "nvme_iov_md": false 00:08:11.447 }, 00:08:11.447 "memory_domains": [ 00:08:11.447 { 00:08:11.447 "dma_device_id": "system", 00:08:11.447 "dma_device_type": 1 00:08:11.448 }, 00:08:11.448 { 00:08:11.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.448 "dma_device_type": 2 00:08:11.448 } 00:08:11.448 ], 00:08:11.448 "driver_specific": {} 00:08:11.448 } 00:08:11.448 ] 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.448 [2024-11-20 03:15:01.030666] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.448 [2024-11-20 03:15:01.030715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.448 [2024-11-20 03:15:01.030739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.448 [2024-11-20 03:15:01.032655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.448 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.708 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:11.708 "name": "Existed_Raid", 00:08:11.708 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5", 00:08:11.708 "strip_size_kb": 64, 00:08:11.708 "state": "configuring", 00:08:11.708 "raid_level": "raid0", 00:08:11.708 "superblock": true, 00:08:11.708 "num_base_bdevs": 3, 00:08:11.708 "num_base_bdevs_discovered": 2, 00:08:11.708 "num_base_bdevs_operational": 3, 00:08:11.708 "base_bdevs_list": [ 00:08:11.708 { 00:08:11.708 "name": "BaseBdev1", 00:08:11.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.708 "is_configured": false, 00:08:11.708 "data_offset": 0, 00:08:11.708 "data_size": 0 00:08:11.708 }, 00:08:11.708 { 00:08:11.708 "name": "BaseBdev2", 00:08:11.708 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf", 00:08:11.708 "is_configured": true, 00:08:11.708 "data_offset": 2048, 00:08:11.708 "data_size": 63488 00:08:11.708 }, 00:08:11.708 { 00:08:11.708 "name": "BaseBdev3", 00:08:11.708 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb", 00:08:11.708 "is_configured": true, 00:08:11.708 "data_offset": 2048, 00:08:11.708 "data_size": 63488 00:08:11.708 } 00:08:11.708 ] 00:08:11.708 }' 00:08:11.708 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.708 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.967 [2024-11-20 03:15:01.453902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.967 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.967 "name": "Existed_Raid", 00:08:11.967 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5", 00:08:11.967 "strip_size_kb": 64, 00:08:11.967 "state": "configuring", 00:08:11.967 "raid_level": "raid0", 
00:08:11.967 "superblock": true, 00:08:11.967 "num_base_bdevs": 3, 00:08:11.967 "num_base_bdevs_discovered": 1, 00:08:11.967 "num_base_bdevs_operational": 3, 00:08:11.967 "base_bdevs_list": [ 00:08:11.967 { 00:08:11.967 "name": "BaseBdev1", 00:08:11.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.967 "is_configured": false, 00:08:11.967 "data_offset": 0, 00:08:11.967 "data_size": 0 00:08:11.967 }, 00:08:11.967 { 00:08:11.967 "name": null, 00:08:11.967 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf", 00:08:11.967 "is_configured": false, 00:08:11.968 "data_offset": 0, 00:08:11.968 "data_size": 63488 00:08:11.968 }, 00:08:11.968 { 00:08:11.968 "name": "BaseBdev3", 00:08:11.968 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb", 00:08:11.968 "is_configured": true, 00:08:11.968 "data_offset": 2048, 00:08:11.968 "data_size": 63488 00:08:11.968 } 00:08:11.968 ] 00:08:11.968 }' 00:08:11.968 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.968 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.227 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.227 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:12.227 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.227 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.485 [2024-11-20 03:15:01.934845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.485 BaseBdev1 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.485 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.486 [ 00:08:12.486 { 00:08:12.486 "name": "BaseBdev1", 00:08:12.486 
"aliases": [ 00:08:12.486 "d8c4ef28-d253-4934-8981-33e77bef66fe" 00:08:12.486 ], 00:08:12.486 "product_name": "Malloc disk", 00:08:12.486 "block_size": 512, 00:08:12.486 "num_blocks": 65536, 00:08:12.486 "uuid": "d8c4ef28-d253-4934-8981-33e77bef66fe", 00:08:12.486 "assigned_rate_limits": { 00:08:12.486 "rw_ios_per_sec": 0, 00:08:12.486 "rw_mbytes_per_sec": 0, 00:08:12.486 "r_mbytes_per_sec": 0, 00:08:12.486 "w_mbytes_per_sec": 0 00:08:12.486 }, 00:08:12.486 "claimed": true, 00:08:12.486 "claim_type": "exclusive_write", 00:08:12.486 "zoned": false, 00:08:12.486 "supported_io_types": { 00:08:12.486 "read": true, 00:08:12.486 "write": true, 00:08:12.486 "unmap": true, 00:08:12.486 "flush": true, 00:08:12.486 "reset": true, 00:08:12.486 "nvme_admin": false, 00:08:12.486 "nvme_io": false, 00:08:12.486 "nvme_io_md": false, 00:08:12.486 "write_zeroes": true, 00:08:12.486 "zcopy": true, 00:08:12.486 "get_zone_info": false, 00:08:12.486 "zone_management": false, 00:08:12.486 "zone_append": false, 00:08:12.486 "compare": false, 00:08:12.486 "compare_and_write": false, 00:08:12.486 "abort": true, 00:08:12.486 "seek_hole": false, 00:08:12.486 "seek_data": false, 00:08:12.486 "copy": true, 00:08:12.486 "nvme_iov_md": false 00:08:12.486 }, 00:08:12.486 "memory_domains": [ 00:08:12.486 { 00:08:12.486 "dma_device_id": "system", 00:08:12.486 "dma_device_type": 1 00:08:12.486 }, 00:08:12.486 { 00:08:12.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.486 "dma_device_type": 2 00:08:12.486 } 00:08:12.486 ], 00:08:12.486 "driver_specific": {} 00:08:12.486 } 00:08:12.486 ] 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.486 03:15:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.486 03:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.486 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.486 "name": "Existed_Raid", 00:08:12.486 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5", 00:08:12.486 "strip_size_kb": 64, 00:08:12.486 "state": "configuring", 00:08:12.486 "raid_level": "raid0", 00:08:12.486 "superblock": true, 00:08:12.486 "num_base_bdevs": 3, 00:08:12.486 
"num_base_bdevs_discovered": 2, 00:08:12.486 "num_base_bdevs_operational": 3, 00:08:12.486 "base_bdevs_list": [ 00:08:12.486 { 00:08:12.486 "name": "BaseBdev1", 00:08:12.486 "uuid": "d8c4ef28-d253-4934-8981-33e77bef66fe", 00:08:12.486 "is_configured": true, 00:08:12.486 "data_offset": 2048, 00:08:12.486 "data_size": 63488 00:08:12.486 }, 00:08:12.486 { 00:08:12.486 "name": null, 00:08:12.486 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf", 00:08:12.486 "is_configured": false, 00:08:12.486 "data_offset": 0, 00:08:12.486 "data_size": 63488 00:08:12.486 }, 00:08:12.486 { 00:08:12.486 "name": "BaseBdev3", 00:08:12.486 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb", 00:08:12.486 "is_configured": true, 00:08:12.486 "data_offset": 2048, 00:08:12.486 "data_size": 63488 00:08:12.486 } 00:08:12.486 ] 00:08:12.486 }' 00:08:12.486 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.486 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.055 03:15:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.055 [2024-11-20 03:15:02.458025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.055 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.056 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.056 03:15:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.056 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.056 "name": "Existed_Raid", 00:08:13.056 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5", 00:08:13.056 "strip_size_kb": 64, 00:08:13.056 "state": "configuring", 00:08:13.056 "raid_level": "raid0", 00:08:13.056 "superblock": true, 00:08:13.056 "num_base_bdevs": 3, 00:08:13.056 "num_base_bdevs_discovered": 1, 00:08:13.056 "num_base_bdevs_operational": 3, 00:08:13.056 "base_bdevs_list": [ 00:08:13.056 { 00:08:13.056 "name": "BaseBdev1", 00:08:13.056 "uuid": "d8c4ef28-d253-4934-8981-33e77bef66fe", 00:08:13.056 "is_configured": true, 00:08:13.056 "data_offset": 2048, 00:08:13.056 "data_size": 63488 00:08:13.056 }, 00:08:13.056 { 00:08:13.056 "name": null, 00:08:13.056 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf", 00:08:13.056 "is_configured": false, 00:08:13.056 "data_offset": 0, 00:08:13.056 "data_size": 63488 00:08:13.056 }, 00:08:13.056 { 00:08:13.056 "name": null, 00:08:13.056 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb", 00:08:13.056 "is_configured": false, 00:08:13.056 "data_offset": 0, 00:08:13.056 "data_size": 63488 00:08:13.056 } 00:08:13.056 ] 00:08:13.056 }' 00:08:13.056 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.056 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:13.315 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.315 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.315 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.575 03:15:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.575 [2024-11-20 03:15:02.985174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.575 03:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.575 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.575 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.575 "name": "Existed_Raid", 00:08:13.575 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5", 00:08:13.575 "strip_size_kb": 64, 00:08:13.575 "state": "configuring", 00:08:13.575 "raid_level": "raid0", 00:08:13.575 "superblock": true, 00:08:13.575 "num_base_bdevs": 3, 00:08:13.575 "num_base_bdevs_discovered": 2, 00:08:13.575 "num_base_bdevs_operational": 3, 00:08:13.575 "base_bdevs_list": [ 00:08:13.575 { 00:08:13.575 "name": "BaseBdev1", 00:08:13.575 "uuid": "d8c4ef28-d253-4934-8981-33e77bef66fe", 00:08:13.575 "is_configured": true, 00:08:13.575 "data_offset": 2048, 00:08:13.575 "data_size": 63488 00:08:13.575 }, 00:08:13.575 { 00:08:13.575 "name": null, 00:08:13.575 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf", 00:08:13.575 "is_configured": false, 00:08:13.575 "data_offset": 0, 00:08:13.575 "data_size": 63488 00:08:13.575 }, 00:08:13.575 { 00:08:13.575 "name": "BaseBdev3", 00:08:13.575 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb", 00:08:13.575 "is_configured": true, 00:08:13.575 "data_offset": 2048, 00:08:13.575 "data_size": 63488 00:08:13.575 } 00:08:13.575 ] 00:08:13.575 }' 00:08:13.575 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.575 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:13.835 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:13.835 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.835 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.835 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.835 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.835 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:13.835 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:13.835 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.835 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.835 [2024-11-20 03:15:03.444373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.095 "name": "Existed_Raid", 00:08:14.095 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5", 00:08:14.095 "strip_size_kb": 64, 00:08:14.095 "state": "configuring", 00:08:14.095 "raid_level": "raid0", 00:08:14.095 "superblock": true, 00:08:14.095 "num_base_bdevs": 3, 00:08:14.095 "num_base_bdevs_discovered": 1, 00:08:14.095 "num_base_bdevs_operational": 3, 00:08:14.095 "base_bdevs_list": [ 00:08:14.095 { 00:08:14.095 "name": null, 00:08:14.095 "uuid": "d8c4ef28-d253-4934-8981-33e77bef66fe", 00:08:14.095 "is_configured": false, 00:08:14.095 "data_offset": 0, 00:08:14.095 "data_size": 63488 00:08:14.095 }, 00:08:14.095 { 00:08:14.095 "name": null, 00:08:14.095 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf", 00:08:14.095 "is_configured": false, 00:08:14.095 "data_offset": 0, 00:08:14.095 "data_size": 63488 00:08:14.095 
}, 00:08:14.095 { 00:08:14.095 "name": "BaseBdev3", 00:08:14.095 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb", 00:08:14.095 "is_configured": true, 00:08:14.095 "data_offset": 2048, 00:08:14.095 "data_size": 63488 00:08:14.095 } 00:08:14.095 ] 00:08:14.095 }' 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.095 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.384 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.384 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.384 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.384 03:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:14.384 03:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.664 [2024-11-20 03:15:04.027480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.664 "name": "Existed_Raid", 00:08:14.664 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5", 00:08:14.664 "strip_size_kb": 64, 00:08:14.664 "state": "configuring", 00:08:14.664 "raid_level": "raid0", 00:08:14.664 "superblock": true, 00:08:14.664 "num_base_bdevs": 3, 00:08:14.664 "num_base_bdevs_discovered": 2, 00:08:14.664 
"num_base_bdevs_operational": 3, 00:08:14.664 "base_bdevs_list": [ 00:08:14.664 { 00:08:14.664 "name": null, 00:08:14.664 "uuid": "d8c4ef28-d253-4934-8981-33e77bef66fe", 00:08:14.664 "is_configured": false, 00:08:14.664 "data_offset": 0, 00:08:14.664 "data_size": 63488 00:08:14.664 }, 00:08:14.664 { 00:08:14.664 "name": "BaseBdev2", 00:08:14.664 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf", 00:08:14.664 "is_configured": true, 00:08:14.664 "data_offset": 2048, 00:08:14.664 "data_size": 63488 00:08:14.664 }, 00:08:14.664 { 00:08:14.664 "name": "BaseBdev3", 00:08:14.664 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb", 00:08:14.664 "is_configured": true, 00:08:14.664 "data_offset": 2048, 00:08:14.664 "data_size": 63488 00:08:14.664 } 00:08:14.664 ] 00:08:14.664 }' 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.664 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d8c4ef28-d253-4934-8981-33e77bef66fe 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.183 [2024-11-20 03:15:04.576976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:15.183 [2024-11-20 03:15:04.577184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:15.183 [2024-11-20 03:15:04.577200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:15.183 [2024-11-20 03:15:04.577429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:15.183 [2024-11-20 03:15:04.577586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:15.183 [2024-11-20 03:15:04.577596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:15.183 [2024-11-20 03:15:04.577752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.183 NewBaseBdev 00:08:15.183 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.183 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:15.183 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:15.183 03:15:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.183 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:15.183 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.184 [ 00:08:15.184 { 00:08:15.184 "name": "NewBaseBdev", 00:08:15.184 "aliases": [ 00:08:15.184 "d8c4ef28-d253-4934-8981-33e77bef66fe" 00:08:15.184 ], 00:08:15.184 "product_name": "Malloc disk", 00:08:15.184 "block_size": 512, 00:08:15.184 "num_blocks": 65536, 00:08:15.184 "uuid": "d8c4ef28-d253-4934-8981-33e77bef66fe", 00:08:15.184 "assigned_rate_limits": { 00:08:15.184 "rw_ios_per_sec": 0, 00:08:15.184 "rw_mbytes_per_sec": 0, 00:08:15.184 "r_mbytes_per_sec": 0, 00:08:15.184 "w_mbytes_per_sec": 0 00:08:15.184 }, 00:08:15.184 "claimed": true, 00:08:15.184 "claim_type": "exclusive_write", 00:08:15.184 "zoned": false, 00:08:15.184 "supported_io_types": { 00:08:15.184 "read": true, 00:08:15.184 "write": true, 00:08:15.184 "unmap": true, 
00:08:15.184 "flush": true, 00:08:15.184 "reset": true, 00:08:15.184 "nvme_admin": false, 00:08:15.184 "nvme_io": false, 00:08:15.184 "nvme_io_md": false, 00:08:15.184 "write_zeroes": true, 00:08:15.184 "zcopy": true, 00:08:15.184 "get_zone_info": false, 00:08:15.184 "zone_management": false, 00:08:15.184 "zone_append": false, 00:08:15.184 "compare": false, 00:08:15.184 "compare_and_write": false, 00:08:15.184 "abort": true, 00:08:15.184 "seek_hole": false, 00:08:15.184 "seek_data": false, 00:08:15.184 "copy": true, 00:08:15.184 "nvme_iov_md": false 00:08:15.184 }, 00:08:15.184 "memory_domains": [ 00:08:15.184 { 00:08:15.184 "dma_device_id": "system", 00:08:15.184 "dma_device_type": 1 00:08:15.184 }, 00:08:15.184 { 00:08:15.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.184 "dma_device_type": 2 00:08:15.184 } 00:08:15.184 ], 00:08:15.184 "driver_specific": {} 00:08:15.184 } 00:08:15.184 ] 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.184 03:15:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.184 "name": "Existed_Raid", 00:08:15.184 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5", 00:08:15.184 "strip_size_kb": 64, 00:08:15.184 "state": "online", 00:08:15.184 "raid_level": "raid0", 00:08:15.184 "superblock": true, 00:08:15.184 "num_base_bdevs": 3, 00:08:15.184 "num_base_bdevs_discovered": 3, 00:08:15.184 "num_base_bdevs_operational": 3, 00:08:15.184 "base_bdevs_list": [ 00:08:15.184 { 00:08:15.184 "name": "NewBaseBdev", 00:08:15.184 "uuid": "d8c4ef28-d253-4934-8981-33e77bef66fe", 00:08:15.184 "is_configured": true, 00:08:15.184 "data_offset": 2048, 00:08:15.184 "data_size": 63488 00:08:15.184 }, 00:08:15.184 { 00:08:15.184 "name": "BaseBdev2", 00:08:15.184 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf", 00:08:15.184 "is_configured": true, 00:08:15.184 "data_offset": 2048, 00:08:15.184 "data_size": 63488 00:08:15.184 }, 00:08:15.184 { 00:08:15.184 "name": "BaseBdev3", 00:08:15.184 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb", 00:08:15.184 "is_configured": 
true, 00:08:15.184 "data_offset": 2048, 00:08:15.184 "data_size": 63488 00:08:15.184 } 00:08:15.184 ] 00:08:15.184 }' 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.184 03:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.444 [2024-11-20 03:15:05.036551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.444 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.444 "name": "Existed_Raid", 00:08:15.444 "aliases": [ 00:08:15.444 "45f1001b-f4ac-42ad-bdde-c92756b6f1f5" 00:08:15.444 ], 00:08:15.444 "product_name": "Raid Volume", 
00:08:15.444 "block_size": 512,
00:08:15.444 "num_blocks": 190464,
00:08:15.444 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5",
00:08:15.444 "assigned_rate_limits": {
00:08:15.444 "rw_ios_per_sec": 0,
00:08:15.444 "rw_mbytes_per_sec": 0,
00:08:15.444 "r_mbytes_per_sec": 0,
00:08:15.444 "w_mbytes_per_sec": 0
00:08:15.444 },
00:08:15.444 "claimed": false,
00:08:15.444 "zoned": false,
00:08:15.444 "supported_io_types": {
00:08:15.444 "read": true,
00:08:15.444 "write": true,
00:08:15.444 "unmap": true,
00:08:15.444 "flush": true,
00:08:15.444 "reset": true,
00:08:15.444 "nvme_admin": false,
00:08:15.444 "nvme_io": false,
00:08:15.444 "nvme_io_md": false,
00:08:15.444 "write_zeroes": true,
00:08:15.444 "zcopy": false,
00:08:15.444 "get_zone_info": false,
00:08:15.444 "zone_management": false,
00:08:15.444 "zone_append": false,
00:08:15.444 "compare": false,
00:08:15.444 "compare_and_write": false,
00:08:15.444 "abort": false,
00:08:15.444 "seek_hole": false,
00:08:15.444 "seek_data": false,
00:08:15.444 "copy": false,
00:08:15.444 "nvme_iov_md": false
00:08:15.444 },
00:08:15.444 "memory_domains": [
00:08:15.444 {
00:08:15.444 "dma_device_id": "system",
00:08:15.444 "dma_device_type": 1
00:08:15.444 },
00:08:15.444 {
00:08:15.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:15.444 "dma_device_type": 2
00:08:15.444 },
00:08:15.444 {
00:08:15.445 "dma_device_id": "system",
00:08:15.445 "dma_device_type": 1
00:08:15.445 },
00:08:15.445 {
00:08:15.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:15.445 "dma_device_type": 2
00:08:15.445 },
00:08:15.445 {
00:08:15.445 "dma_device_id": "system",
00:08:15.445 "dma_device_type": 1
00:08:15.445 },
00:08:15.445 {
00:08:15.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:15.445 "dma_device_type": 2
00:08:15.445 }
00:08:15.445 ],
00:08:15.445 "driver_specific": {
00:08:15.445 "raid": {
00:08:15.445 "uuid": "45f1001b-f4ac-42ad-bdde-c92756b6f1f5",
00:08:15.445 "strip_size_kb": 64,
00:08:15.445 "state": "online",
00:08:15.445 "raid_level": "raid0",
00:08:15.445 "superblock": true,
00:08:15.445 "num_base_bdevs": 3,
00:08:15.445 "num_base_bdevs_discovered": 3,
00:08:15.445 "num_base_bdevs_operational": 3,
00:08:15.445 "base_bdevs_list": [
00:08:15.445 {
00:08:15.445 "name": "NewBaseBdev",
00:08:15.445 "uuid": "d8c4ef28-d253-4934-8981-33e77bef66fe",
00:08:15.445 "is_configured": true,
00:08:15.445 "data_offset": 2048,
00:08:15.445 "data_size": 63488
00:08:15.445 },
00:08:15.445 {
00:08:15.445 "name": "BaseBdev2",
00:08:15.445 "uuid": "76d47c9b-a17a-4343-99ce-6ab39baffbcf",
00:08:15.445 "is_configured": true,
00:08:15.445 "data_offset": 2048,
00:08:15.445 "data_size": 63488
00:08:15.445 },
00:08:15.445 {
00:08:15.445 "name": "BaseBdev3",
00:08:15.445 "uuid": "95e0e758-dfac-4c27-b499-13fd1403cacb",
00:08:15.445 "is_configured": true,
00:08:15.445 "data_offset": 2048,
00:08:15.445 "data_size": 63488
00:08:15.445 }
00:08:15.445 ]
00:08:15.445 }
00:08:15.445 }
00:08:15.445 }'
00:08:15.445 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:08:15.705 BaseBdev2
00:08:15.705 BaseBdev3'
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:15.705 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:15.706 [2024-11-20 03:15:05.283845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:15.706 [2024-11-20 03:15:05.283878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:15.706 [2024-11-20 03:15:05.283964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:15.706 [2024-11-20 03:15:05.284021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:15.706 [2024-11-20 03:15:05.284040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64299
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64299 ']'
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64299
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64299
killing process with pid 64299
03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64299'
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64299
00:08:15.706 [2024-11-20 03:15:05.330106] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:15.706 03:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64299
00:08:16.275 [2024-11-20 03:15:05.632478] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:17.216 ************************************
00:08:17.216 END TEST raid_state_function_test_sb
00:08:17.216 ************************************
00:08:17.216 03:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:08:17.216
00:08:17.216 real 0m10.491s
00:08:17.216 user 0m16.698s
00:08:17.216 sys 0m1.862s
00:08:17.216 03:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:17.216 03:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.216 03:15:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:08:17.216 03:15:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:17.216 03:15:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:17.216 03:15:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:17.216 ************************************
00:08:17.216 START TEST raid_superblock_test
00:08:17.216 ************************************
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64925
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64925
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64925 ']'
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:17.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
03:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:17.216 03:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:17.476 [2024-11-20 03:15:06.891199] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:08:17.476 [2024-11-20 03:15:06.891325] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64925 ]
00:08:17.476 [2024-11-20 03:15:07.045211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.736 [2024-11-20 03:15:07.161962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.996 [2024-11-20 03:15:07.371202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:17.996 [2024-11-20 03:15:07.371241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.257 malloc1
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.257 [2024-11-20 03:15:07.779734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:18.257 [2024-11-20 03:15:07.779813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:18.257 [2024-11-20 03:15:07.779855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:18.257 [2024-11-20 03:15:07.779865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:18.257 [2024-11-20 03:15:07.781975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:18.257 [2024-11-20 03:15:07.782012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.257 malloc2
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.257 [2024-11-20 03:15:07.833781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:18.257 [2024-11-20 03:15:07.833841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:18.257 [2024-11-20 03:15:07.833864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:18.257 [2024-11-20 03:15:07.833873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:18.257 [2024-11-20 03:15:07.835949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:18.257 [2024-11-20 03:15:07.835985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.257 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.518 malloc3
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.518 [2024-11-20 03:15:07.898912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:08:18.518 [2024-11-20 03:15:07.898965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:18.518 [2024-11-20 03:15:07.898986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:18.518 [2024-11-20 03:15:07.898996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:18.518 [2024-11-20 03:15:07.901090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:18.518 [2024-11-20 03:15:07.901126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
pt3
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.518 [2024-11-20 03:15:07.910940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:18.518 [2024-11-20 03:15:07.912744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:18.518 [2024-11-20 03:15:07.912807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:08:18.518 [2024-11-20 03:15:07.912947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:18.518 [2024-11-20 03:15:07.912960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:18.518 [2024-11-20 03:15:07.913218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:18.518 [2024-11-20 03:15:07.913397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:18.518 [2024-11-20 03:15:07.913414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:08:18.518 [2024-11-20 03:15:07.913575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.518 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:18.518 "name": "raid_bdev1",
00:08:18.518 "uuid": "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6",
00:08:18.518 "strip_size_kb": 64,
00:08:18.518 "state": "online",
00:08:18.518 "raid_level": "raid0",
00:08:18.518 "superblock": true,
00:08:18.518 "num_base_bdevs": 3,
00:08:18.518 "num_base_bdevs_discovered": 3,
00:08:18.518 "num_base_bdevs_operational": 3,
00:08:18.518 "base_bdevs_list": [
00:08:18.518 {
00:08:18.519 "name": "pt1",
00:08:18.519 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:18.519 "is_configured": true,
00:08:18.519 "data_offset": 2048,
00:08:18.519 "data_size": 63488
00:08:18.519 },
00:08:18.519 {
00:08:18.519 "name": "pt2",
00:08:18.519 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:18.519 "is_configured": true,
00:08:18.519 "data_offset": 2048,
00:08:18.519 "data_size": 63488
00:08:18.519 },
00:08:18.519 {
00:08:18.519 "name": "pt3",
00:08:18.519 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:18.519 "is_configured": true,
00:08:18.519 "data_offset": 2048,
00:08:18.519 "data_size": 63488
00:08:18.519 }
00:08:18.519 ]
00:08:18.519 }'
00:08:18.519 03:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:18.519 03:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.779 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:18.779 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:18.779 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:18.779 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:18.779 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:18.779 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:19.039 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:19.039 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:19.039 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.039 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.039 [2024-11-20 03:15:08.422413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:19.039 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.039 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:19.039 "name": "raid_bdev1",
00:08:19.039 "aliases": [
00:08:19.039 "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6"
00:08:19.039 ],
00:08:19.039 "product_name": "Raid Volume",
00:08:19.039 "block_size": 512,
00:08:19.039 "num_blocks": 190464,
00:08:19.039 "uuid": "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6",
00:08:19.039 "assigned_rate_limits": {
00:08:19.039 "rw_ios_per_sec": 0,
00:08:19.039 "rw_mbytes_per_sec": 0,
00:08:19.039 "r_mbytes_per_sec": 0,
00:08:19.039 "w_mbytes_per_sec": 0
00:08:19.039 },
00:08:19.039 "claimed": false,
00:08:19.039 "zoned": false,
00:08:19.039 "supported_io_types": {
00:08:19.039 "read": true,
00:08:19.039 "write": true,
00:08:19.039 "unmap": true,
00:08:19.039 "flush": true,
00:08:19.039 "reset": true,
00:08:19.039 "nvme_admin": false,
00:08:19.039 "nvme_io": false,
00:08:19.039 "nvme_io_md": false,
00:08:19.039 "write_zeroes": true,
00:08:19.039 "zcopy": false,
00:08:19.039 "get_zone_info": false,
00:08:19.039 "zone_management": false,
00:08:19.039 "zone_append": false,
00:08:19.039 "compare": false,
00:08:19.039 "compare_and_write": false,
00:08:19.039 "abort": false,
00:08:19.039 "seek_hole": false,
00:08:19.039 "seek_data": false,
00:08:19.040 "copy": false,
00:08:19.040 "nvme_iov_md": false
00:08:19.040 },
00:08:19.040 "memory_domains": [
00:08:19.040 {
00:08:19.040 "dma_device_id": "system",
00:08:19.040 "dma_device_type": 1
00:08:19.040 },
00:08:19.040 {
00:08:19.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:19.040 "dma_device_type": 2
00:08:19.040 },
00:08:19.040 {
00:08:19.040 "dma_device_id": "system",
00:08:19.040 "dma_device_type": 1
00:08:19.040 },
00:08:19.040 {
00:08:19.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:19.040 "dma_device_type": 2
00:08:19.040 },
00:08:19.040 {
00:08:19.040 "dma_device_id": "system",
00:08:19.040 "dma_device_type": 1
00:08:19.040 },
00:08:19.040 {
00:08:19.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:19.040 "dma_device_type": 2
00:08:19.040 }
00:08:19.040 ],
00:08:19.040 "driver_specific": {
00:08:19.040 "raid": {
00:08:19.040 "uuid": "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6",
00:08:19.040 "strip_size_kb": 64,
00:08:19.040 "state": "online",
00:08:19.040 "raid_level": "raid0",
00:08:19.040 "superblock": true,
00:08:19.040 "num_base_bdevs": 3,
00:08:19.040 "num_base_bdevs_discovered": 3,
00:08:19.040 "num_base_bdevs_operational": 3,
00:08:19.040 "base_bdevs_list": [
00:08:19.040 {
00:08:19.040 "name": "pt1",
00:08:19.040 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:19.040 "is_configured": true,
00:08:19.040 "data_offset": 2048,
00:08:19.040 "data_size": 63488
00:08:19.040 },
00:08:19.040 {
00:08:19.040 "name": "pt2",
00:08:19.040 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:19.040 "is_configured": true,
00:08:19.040 "data_offset": 2048,
00:08:19.040 "data_size": 63488
00:08:19.040 },
00:08:19.040 {
00:08:19.040 "name": "pt3",
00:08:19.040 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:19.040 "is_configured": true,
00:08:19.040 "data_offset": 2048,
00:08:19.040 "data_size": 63488
00:08:19.040 }
00:08:19.040 ]
00:08:19.040 }
00:08:19.040 }
00:08:19.040 }'
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:19.040 pt2
00:08:19.040 pt3'
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:19.040 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.301 [2024-11-20 03:15:08.713846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6 ']'
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.301 [2024-11-20 03:15:08.757474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:19.301 [2024-11-20 03:15:08.757506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:19.301 [2024-11-20 03:15:08.757601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:19.301 [2024-11-20 03:15:08.757677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:19.301 [2024-11-20 03:15:08.757688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:19.301 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.302 [2024-11-20 03:15:08.893284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:19.302 [2024-11-20 03:15:08.895196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:19.302 [2024-11-20 03:15:08.895258]
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:19.302 [2024-11-20 03:15:08.895309] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:19.302 [2024-11-20 03:15:08.895361] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:19.302 [2024-11-20 03:15:08.895382] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:19.302 [2024-11-20 03:15:08.895400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.302 [2024-11-20 03:15:08.895412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:19.302 request: 00:08:19.302 { 00:08:19.302 "name": "raid_bdev1", 00:08:19.302 "raid_level": "raid0", 00:08:19.302 "base_bdevs": [ 00:08:19.302 "malloc1", 00:08:19.302 "malloc2", 00:08:19.302 "malloc3" 00:08:19.302 ], 00:08:19.302 "strip_size_kb": 64, 00:08:19.302 "superblock": false, 00:08:19.302 "method": "bdev_raid_create", 00:08:19.302 "req_id": 1 00:08:19.302 } 00:08:19.302 Got JSON-RPC error response 00:08:19.302 response: 00:08:19.302 { 00:08:19.302 "code": -17, 00:08:19.302 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:19.302 } 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.302 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.562 [2024-11-20 03:15:08.953130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.562 [2024-11-20 03:15:08.953185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.562 [2024-11-20 03:15:08.953204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:19.562 [2024-11-20 03:15:08.953213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.562 [2024-11-20 03:15:08.955500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.562 [2024-11-20 03:15:08.955538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.562 [2024-11-20 03:15:08.955630] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:19.562 [2024-11-20 03:15:08.955686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:19.562 pt1 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.562 03:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.562 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.562 "name": "raid_bdev1", 00:08:19.562 "uuid": "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6", 00:08:19.562 
"strip_size_kb": 64, 00:08:19.562 "state": "configuring", 00:08:19.562 "raid_level": "raid0", 00:08:19.562 "superblock": true, 00:08:19.562 "num_base_bdevs": 3, 00:08:19.562 "num_base_bdevs_discovered": 1, 00:08:19.562 "num_base_bdevs_operational": 3, 00:08:19.562 "base_bdevs_list": [ 00:08:19.562 { 00:08:19.562 "name": "pt1", 00:08:19.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.562 "is_configured": true, 00:08:19.562 "data_offset": 2048, 00:08:19.562 "data_size": 63488 00:08:19.562 }, 00:08:19.562 { 00:08:19.562 "name": null, 00:08:19.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.562 "is_configured": false, 00:08:19.562 "data_offset": 2048, 00:08:19.562 "data_size": 63488 00:08:19.562 }, 00:08:19.562 { 00:08:19.562 "name": null, 00:08:19.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.562 "is_configured": false, 00:08:19.562 "data_offset": 2048, 00:08:19.562 "data_size": 63488 00:08:19.562 } 00:08:19.562 ] 00:08:19.562 }' 00:08:19.562 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.562 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.823 [2024-11-20 03:15:09.384426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.823 [2024-11-20 03:15:09.384494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.823 [2024-11-20 03:15:09.384516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:19.823 [2024-11-20 03:15:09.384525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.823 [2024-11-20 03:15:09.385018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.823 [2024-11-20 03:15:09.385045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.823 [2024-11-20 03:15:09.385137] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.823 [2024-11-20 03:15:09.385162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.823 pt2 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.823 [2024-11-20 03:15:09.392404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.823 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.824 03:15:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.824 "name": "raid_bdev1", 00:08:19.824 "uuid": "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6", 00:08:19.824 "strip_size_kb": 64, 00:08:19.824 "state": "configuring", 00:08:19.824 "raid_level": "raid0", 00:08:19.824 "superblock": true, 00:08:19.824 "num_base_bdevs": 3, 00:08:19.824 "num_base_bdevs_discovered": 1, 00:08:19.824 "num_base_bdevs_operational": 3, 00:08:19.824 "base_bdevs_list": [ 00:08:19.824 { 00:08:19.824 "name": "pt1", 00:08:19.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.824 "is_configured": true, 00:08:19.824 "data_offset": 2048, 00:08:19.824 "data_size": 63488 00:08:19.824 }, 00:08:19.824 { 00:08:19.824 "name": null, 00:08:19.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.824 "is_configured": false, 00:08:19.824 "data_offset": 0, 00:08:19.824 "data_size": 63488 00:08:19.824 }, 00:08:19.824 { 00:08:19.824 "name": null, 00:08:19.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.824 
"is_configured": false, 00:08:19.824 "data_offset": 2048, 00:08:19.824 "data_size": 63488 00:08:19.824 } 00:08:19.824 ] 00:08:19.824 }' 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.824 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 [2024-11-20 03:15:09.795700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.395 [2024-11-20 03:15:09.795779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.395 [2024-11-20 03:15:09.795798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:20.395 [2024-11-20 03:15:09.795808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.395 [2024-11-20 03:15:09.796269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.395 [2024-11-20 03:15:09.796301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.395 [2024-11-20 03:15:09.796385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:20.395 [2024-11-20 03:15:09.796416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.395 pt2 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 [2024-11-20 03:15:09.807665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:20.395 [2024-11-20 03:15:09.807713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.395 [2024-11-20 03:15:09.807728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:20.395 [2024-11-20 03:15:09.807738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.395 [2024-11-20 03:15:09.808139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.395 [2024-11-20 03:15:09.808176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:20.395 [2024-11-20 03:15:09.808244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:20.395 [2024-11-20 03:15:09.808269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:20.395 [2024-11-20 03:15:09.808412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:20.395 [2024-11-20 03:15:09.808432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.395 [2024-11-20 03:15:09.808708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:20.395 [2024-11-20 03:15:09.808872] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:20.395 [2024-11-20 03:15:09.808887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:20.395 [2024-11-20 03:15:09.809030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.395 pt3 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.395 "name": "raid_bdev1", 00:08:20.395 "uuid": "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6", 00:08:20.395 "strip_size_kb": 64, 00:08:20.395 "state": "online", 00:08:20.395 "raid_level": "raid0", 00:08:20.395 "superblock": true, 00:08:20.395 "num_base_bdevs": 3, 00:08:20.395 "num_base_bdevs_discovered": 3, 00:08:20.395 "num_base_bdevs_operational": 3, 00:08:20.395 "base_bdevs_list": [ 00:08:20.395 { 00:08:20.395 "name": "pt1", 00:08:20.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.395 "is_configured": true, 00:08:20.395 "data_offset": 2048, 00:08:20.395 "data_size": 63488 00:08:20.395 }, 00:08:20.395 { 00:08:20.395 "name": "pt2", 00:08:20.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.395 "is_configured": true, 00:08:20.395 "data_offset": 2048, 00:08:20.395 "data_size": 63488 00:08:20.395 }, 00:08:20.395 { 00:08:20.395 "name": "pt3", 00:08:20.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.395 "is_configured": true, 00:08:20.395 "data_offset": 2048, 00:08:20.395 "data_size": 63488 00:08:20.395 } 00:08:20.395 ] 00:08:20.395 }' 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.395 03:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:20.656 03:15:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.656 [2024-11-20 03:15:10.235305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.656 "name": "raid_bdev1", 00:08:20.656 "aliases": [ 00:08:20.656 "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6" 00:08:20.656 ], 00:08:20.656 "product_name": "Raid Volume", 00:08:20.656 "block_size": 512, 00:08:20.656 "num_blocks": 190464, 00:08:20.656 "uuid": "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6", 00:08:20.656 "assigned_rate_limits": { 00:08:20.656 "rw_ios_per_sec": 0, 00:08:20.656 "rw_mbytes_per_sec": 0, 00:08:20.656 "r_mbytes_per_sec": 0, 00:08:20.656 "w_mbytes_per_sec": 0 00:08:20.656 }, 00:08:20.656 "claimed": false, 00:08:20.656 "zoned": false, 00:08:20.656 "supported_io_types": { 00:08:20.656 "read": true, 00:08:20.656 "write": true, 00:08:20.656 "unmap": true, 00:08:20.656 "flush": true, 00:08:20.656 "reset": true, 00:08:20.656 "nvme_admin": false, 00:08:20.656 "nvme_io": false, 00:08:20.656 "nvme_io_md": false, 00:08:20.656 
"write_zeroes": true, 00:08:20.656 "zcopy": false, 00:08:20.656 "get_zone_info": false, 00:08:20.656 "zone_management": false, 00:08:20.656 "zone_append": false, 00:08:20.656 "compare": false, 00:08:20.656 "compare_and_write": false, 00:08:20.656 "abort": false, 00:08:20.656 "seek_hole": false, 00:08:20.656 "seek_data": false, 00:08:20.656 "copy": false, 00:08:20.656 "nvme_iov_md": false 00:08:20.656 }, 00:08:20.656 "memory_domains": [ 00:08:20.656 { 00:08:20.656 "dma_device_id": "system", 00:08:20.656 "dma_device_type": 1 00:08:20.656 }, 00:08:20.656 { 00:08:20.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.656 "dma_device_type": 2 00:08:20.656 }, 00:08:20.656 { 00:08:20.656 "dma_device_id": "system", 00:08:20.656 "dma_device_type": 1 00:08:20.656 }, 00:08:20.656 { 00:08:20.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.656 "dma_device_type": 2 00:08:20.656 }, 00:08:20.656 { 00:08:20.656 "dma_device_id": "system", 00:08:20.656 "dma_device_type": 1 00:08:20.656 }, 00:08:20.656 { 00:08:20.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.656 "dma_device_type": 2 00:08:20.656 } 00:08:20.656 ], 00:08:20.656 "driver_specific": { 00:08:20.656 "raid": { 00:08:20.656 "uuid": "2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6", 00:08:20.656 "strip_size_kb": 64, 00:08:20.656 "state": "online", 00:08:20.656 "raid_level": "raid0", 00:08:20.656 "superblock": true, 00:08:20.656 "num_base_bdevs": 3, 00:08:20.656 "num_base_bdevs_discovered": 3, 00:08:20.656 "num_base_bdevs_operational": 3, 00:08:20.656 "base_bdevs_list": [ 00:08:20.656 { 00:08:20.656 "name": "pt1", 00:08:20.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.656 "is_configured": true, 00:08:20.656 "data_offset": 2048, 00:08:20.656 "data_size": 63488 00:08:20.656 }, 00:08:20.656 { 00:08:20.656 "name": "pt2", 00:08:20.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.656 "is_configured": true, 00:08:20.656 "data_offset": 2048, 00:08:20.656 "data_size": 63488 00:08:20.656 }, 00:08:20.656 
{ 00:08:20.656 "name": "pt3", 00:08:20.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.656 "is_configured": true, 00:08:20.656 "data_offset": 2048, 00:08:20.656 "data_size": 63488 00:08:20.656 } 00:08:20.656 ] 00:08:20.656 } 00:08:20.656 } 00:08:20.656 }' 00:08:20.656 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.916 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:20.917 pt2 00:08:20.917 pt3' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.917 03:15:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.917 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.917 
[2024-11-20 03:15:10.538755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.177 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.177 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6 '!=' 2e6ad5c8-2e06-4a47-a1d1-08c8bea664a6 ']' 00:08:21.177 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:21.177 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.177 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.177 03:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64925 00:08:21.177 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64925 ']' 00:08:21.177 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64925 00:08:21.177 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:21.178 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.178 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64925 00:08:21.178 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.178 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.178 killing process with pid 64925 00:08:21.178 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64925' 00:08:21.178 03:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64925 00:08:21.178 [2024-11-20 03:15:10.605105] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.178 03:15:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 64925 00:08:21.178 [2024-11-20 03:15:10.605215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.178 [2024-11-20 03:15:10.605279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.178 [2024-11-20 03:15:10.605292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:21.438 [2024-11-20 03:15:10.903873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.379 03:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:22.379 00:08:22.379 real 0m5.193s 00:08:22.379 user 0m7.471s 00:08:22.379 sys 0m0.894s 00:08:22.379 03:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.379 03:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.379 ************************************ 00:08:22.379 END TEST raid_superblock_test 00:08:22.379 ************************************ 00:08:22.639 03:15:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:22.639 03:15:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:22.639 03:15:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.639 03:15:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.639 ************************************ 00:08:22.639 START TEST raid_read_error_test 00:08:22.639 ************************************ 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:22.639 03:15:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.o239wYt8JY 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65178 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65178 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65178 ']' 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.639 03:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.640 03:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.640 03:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.640 [2024-11-20 03:15:12.170767] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:08:22.640 [2024-11-20 03:15:12.170908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65178 ] 00:08:22.899 [2024-11-20 03:15:12.329548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.899 [2024-11-20 03:15:12.444813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.159 [2024-11-20 03:15:12.642658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.159 [2024-11-20 03:15:12.642727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.419 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.419 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:23.419 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.419 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:23.419 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.419 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 BaseBdev1_malloc 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 true 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 [2024-11-20 03:15:13.075601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:23.680 [2024-11-20 03:15:13.075667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.680 [2024-11-20 03:15:13.075702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:23.680 [2024-11-20 03:15:13.075713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.680 [2024-11-20 03:15:13.077809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.680 [2024-11-20 03:15:13.077846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:23.680 BaseBdev1 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 BaseBdev2_malloc 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 true 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 [2024-11-20 03:15:13.142576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:23.680 [2024-11-20 03:15:13.142661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.680 [2024-11-20 03:15:13.142680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:23.680 [2024-11-20 03:15:13.142691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.680 [2024-11-20 03:15:13.144880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.680 [2024-11-20 03:15:13.144917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:23.680 BaseBdev2 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 BaseBdev3_malloc 00:08:23.680 03:15:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 true 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 [2024-11-20 03:15:13.219306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:23.680 [2024-11-20 03:15:13.219365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.680 [2024-11-20 03:15:13.219401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:23.680 [2024-11-20 03:15:13.219413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.680 [2024-11-20 03:15:13.221506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.680 [2024-11-20 03:15:13.221547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:23.680 BaseBdev3 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.680 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 [2024-11-20 03:15:13.231335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.680 [2024-11-20 03:15:13.233155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.681 [2024-11-20 03:15:13.233236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.681 [2024-11-20 03:15:13.233424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:23.681 [2024-11-20 03:15:13.233438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:23.681 [2024-11-20 03:15:13.233725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:23.681 [2024-11-20 03:15:13.233895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:23.681 [2024-11-20 03:15:13.233915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:23.681 [2024-11-20 03:15:13.234091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.681 03:15:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.681 "name": "raid_bdev1", 00:08:23.681 "uuid": "ab3e6b2e-6f98-4bc3-9992-182161ce610f", 00:08:23.681 "strip_size_kb": 64, 00:08:23.681 "state": "online", 00:08:23.681 "raid_level": "raid0", 00:08:23.681 "superblock": true, 00:08:23.681 "num_base_bdevs": 3, 00:08:23.681 "num_base_bdevs_discovered": 3, 00:08:23.681 "num_base_bdevs_operational": 3, 00:08:23.681 "base_bdevs_list": [ 00:08:23.681 { 00:08:23.681 "name": "BaseBdev1", 00:08:23.681 "uuid": "9cdd6a4c-2623-513a-8cd2-75db966cc464", 00:08:23.681 "is_configured": true, 00:08:23.681 "data_offset": 2048, 00:08:23.681 "data_size": 63488 00:08:23.681 }, 00:08:23.681 { 00:08:23.681 "name": "BaseBdev2", 00:08:23.681 "uuid": "ab64c087-e39b-5feb-93e4-161f844554ea", 00:08:23.681 "is_configured": true, 00:08:23.681 "data_offset": 2048, 00:08:23.681 "data_size": 63488 
00:08:23.681 }, 00:08:23.681 { 00:08:23.681 "name": "BaseBdev3", 00:08:23.681 "uuid": "28ccc229-0a5c-5ba8-bb9c-9a855c7a8c2c", 00:08:23.681 "is_configured": true, 00:08:23.681 "data_offset": 2048, 00:08:23.681 "data_size": 63488 00:08:23.681 } 00:08:23.681 ] 00:08:23.681 }' 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.681 03:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.250 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:24.250 03:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:24.250 [2024-11-20 03:15:13.763768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.188 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.189 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.189 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.189 03:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.189 03:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.189 03:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.189 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.189 "name": "raid_bdev1", 00:08:25.189 "uuid": "ab3e6b2e-6f98-4bc3-9992-182161ce610f", 00:08:25.189 "strip_size_kb": 64, 00:08:25.189 "state": "online", 00:08:25.189 "raid_level": "raid0", 00:08:25.189 "superblock": true, 00:08:25.189 "num_base_bdevs": 3, 00:08:25.189 "num_base_bdevs_discovered": 3, 00:08:25.189 "num_base_bdevs_operational": 3, 00:08:25.189 "base_bdevs_list": [ 00:08:25.189 { 00:08:25.189 "name": "BaseBdev1", 00:08:25.189 "uuid": "9cdd6a4c-2623-513a-8cd2-75db966cc464", 00:08:25.189 "is_configured": true, 00:08:25.189 "data_offset": 2048, 00:08:25.189 "data_size": 63488 
00:08:25.189 }, 00:08:25.189 { 00:08:25.189 "name": "BaseBdev2", 00:08:25.189 "uuid": "ab64c087-e39b-5feb-93e4-161f844554ea", 00:08:25.189 "is_configured": true, 00:08:25.189 "data_offset": 2048, 00:08:25.189 "data_size": 63488 00:08:25.189 }, 00:08:25.189 { 00:08:25.189 "name": "BaseBdev3", 00:08:25.189 "uuid": "28ccc229-0a5c-5ba8-bb9c-9a855c7a8c2c", 00:08:25.189 "is_configured": true, 00:08:25.189 "data_offset": 2048, 00:08:25.189 "data_size": 63488 00:08:25.189 } 00:08:25.189 ] 00:08:25.189 }' 00:08:25.189 03:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.189 03:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.758 [2024-11-20 03:15:15.127815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.758 [2024-11-20 03:15:15.127850] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.758 [2024-11-20 03:15:15.130486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.758 [2024-11-20 03:15:15.130532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.758 [2024-11-20 03:15:15.130576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.758 [2024-11-20 03:15:15.130590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:25.758 { 00:08:25.758 "results": [ 00:08:25.758 { 00:08:25.758 "job": "raid_bdev1", 00:08:25.758 "core_mask": "0x1", 00:08:25.758 "workload": "randrw", 00:08:25.758 "percentage": 50, 
00:08:25.758 "status": "finished", 00:08:25.758 "queue_depth": 1, 00:08:25.758 "io_size": 131072, 00:08:25.758 "runtime": 1.364925, 00:08:25.758 "iops": 15781.086872905105, 00:08:25.758 "mibps": 1972.6358591131382, 00:08:25.758 "io_failed": 1, 00:08:25.758 "io_timeout": 0, 00:08:25.758 "avg_latency_us": 88.04996633818438, 00:08:25.758 "min_latency_us": 26.270742358078603, 00:08:25.758 "max_latency_us": 1445.2262008733624 00:08:25.758 } 00:08:25.758 ], 00:08:25.758 "core_count": 1 00:08:25.758 } 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65178 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65178 ']' 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65178 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65178 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.758 killing process with pid 65178 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65178' 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65178 00:08:25.758 [2024-11-20 03:15:15.163301] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.758 03:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65178 00:08:26.017 [2024-11-20 
03:15:15.391789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.o239wYt8JY 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:26.957 00:08:26.957 real 0m4.496s 00:08:26.957 user 0m5.362s 00:08:26.957 sys 0m0.543s 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.957 03:15:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.957 ************************************ 00:08:26.957 END TEST raid_read_error_test 00:08:26.957 ************************************ 00:08:27.216 03:15:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:27.216 03:15:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:27.217 03:15:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.217 03:15:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.217 ************************************ 00:08:27.217 START TEST raid_write_error_test 00:08:27.217 ************************************ 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:27.217 03:15:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:27.217 03:15:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.i37dk6xkvq 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65318 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65318 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65318 ']' 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.217 03:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.217 [2024-11-20 03:15:16.737623] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:08:27.217 [2024-11-20 03:15:16.737767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65318 ] 00:08:27.477 [2024-11-20 03:15:16.912848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.477 [2024-11-20 03:15:17.030758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.736 [2024-11-20 03:15:17.243039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.736 [2024-11-20 03:15:17.243106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 BaseBdev1_malloc 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 true 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.996 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 [2024-11-20 03:15:17.634823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:28.257 [2024-11-20 03:15:17.634881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.257 [2024-11-20 03:15:17.634902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:28.257 [2024-11-20 03:15:17.634913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.257 [2024-11-20 03:15:17.637135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.257 [2024-11-20 03:15:17.637178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:28.257 BaseBdev1 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.257 BaseBdev2_malloc 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 true 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 [2024-11-20 03:15:17.700418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:28.257 [2024-11-20 03:15:17.700479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.257 [2024-11-20 03:15:17.700513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:28.257 [2024-11-20 03:15:17.700524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.257 [2024-11-20 03:15:17.702752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.257 [2024-11-20 03:15:17.702790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:28.257 BaseBdev2 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.257 03:15:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 BaseBdev3_malloc 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 true 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 [2024-11-20 03:15:17.782974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:28.257 [2024-11-20 03:15:17.783047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.257 [2024-11-20 03:15:17.783066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:28.257 [2024-11-20 03:15:17.783079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.257 [2024-11-20 03:15:17.785274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.257 [2024-11-20 03:15:17.785315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:28.257 BaseBdev3 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:28.257 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.258 [2024-11-20 03:15:17.795077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.258 [2024-11-20 03:15:17.797066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.258 [2024-11-20 03:15:17.797160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.258 [2024-11-20 03:15:17.797361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:28.258 [2024-11-20 03:15:17.797394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:28.258 [2024-11-20 03:15:17.797729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:28.258 [2024-11-20 03:15:17.797910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:28.258 [2024-11-20 03:15:17.797933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:28.258 [2024-11-20 03:15:17.798116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.258 "name": "raid_bdev1", 00:08:28.258 "uuid": "79e379d7-7a0b-44a5-aeb5-a963e6c2373f", 00:08:28.258 "strip_size_kb": 64, 00:08:28.258 "state": "online", 00:08:28.258 "raid_level": "raid0", 00:08:28.258 "superblock": true, 00:08:28.258 "num_base_bdevs": 3, 00:08:28.258 "num_base_bdevs_discovered": 3, 00:08:28.258 "num_base_bdevs_operational": 3, 00:08:28.258 "base_bdevs_list": [ 00:08:28.258 { 00:08:28.258 "name": "BaseBdev1", 
00:08:28.258 "uuid": "7a1aa66a-c676-5b07-a406-69350089ddb4", 00:08:28.258 "is_configured": true, 00:08:28.258 "data_offset": 2048, 00:08:28.258 "data_size": 63488 00:08:28.258 }, 00:08:28.258 { 00:08:28.258 "name": "BaseBdev2", 00:08:28.258 "uuid": "1e7c8a96-e7d3-573e-8ee3-d4fd3fe01bd9", 00:08:28.258 "is_configured": true, 00:08:28.258 "data_offset": 2048, 00:08:28.258 "data_size": 63488 00:08:28.258 }, 00:08:28.258 { 00:08:28.258 "name": "BaseBdev3", 00:08:28.258 "uuid": "9a3e5830-26ee-5729-a8f3-8253a1e797b4", 00:08:28.258 "is_configured": true, 00:08:28.258 "data_offset": 2048, 00:08:28.258 "data_size": 63488 00:08:28.258 } 00:08:28.258 ] 00:08:28.258 }' 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.258 03:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.828 03:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:28.828 03:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:28.828 [2024-11-20 03:15:18.299473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:29.771 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:29.771 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.771 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.771 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.771 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:29.771 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.772 "name": "raid_bdev1", 00:08:29.772 "uuid": "79e379d7-7a0b-44a5-aeb5-a963e6c2373f", 00:08:29.772 "strip_size_kb": 64, 00:08:29.772 "state": "online", 00:08:29.772 
"raid_level": "raid0", 00:08:29.772 "superblock": true, 00:08:29.772 "num_base_bdevs": 3, 00:08:29.772 "num_base_bdevs_discovered": 3, 00:08:29.772 "num_base_bdevs_operational": 3, 00:08:29.772 "base_bdevs_list": [ 00:08:29.772 { 00:08:29.772 "name": "BaseBdev1", 00:08:29.772 "uuid": "7a1aa66a-c676-5b07-a406-69350089ddb4", 00:08:29.772 "is_configured": true, 00:08:29.772 "data_offset": 2048, 00:08:29.772 "data_size": 63488 00:08:29.772 }, 00:08:29.772 { 00:08:29.772 "name": "BaseBdev2", 00:08:29.772 "uuid": "1e7c8a96-e7d3-573e-8ee3-d4fd3fe01bd9", 00:08:29.772 "is_configured": true, 00:08:29.772 "data_offset": 2048, 00:08:29.772 "data_size": 63488 00:08:29.772 }, 00:08:29.772 { 00:08:29.772 "name": "BaseBdev3", 00:08:29.772 "uuid": "9a3e5830-26ee-5729-a8f3-8253a1e797b4", 00:08:29.772 "is_configured": true, 00:08:29.772 "data_offset": 2048, 00:08:29.772 "data_size": 63488 00:08:29.772 } 00:08:29.772 ] 00:08:29.772 }' 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.772 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.342 [2024-11-20 03:15:19.683489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.342 [2024-11-20 03:15:19.683524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.342 [2024-11-20 03:15:19.686162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.342 [2024-11-20 03:15:19.686209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.342 [2024-11-20 03:15:19.686247] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.342 [2024-11-20 03:15:19.686256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:30.342 { 00:08:30.342 "results": [ 00:08:30.342 { 00:08:30.342 "job": "raid_bdev1", 00:08:30.342 "core_mask": "0x1", 00:08:30.342 "workload": "randrw", 00:08:30.342 "percentage": 50, 00:08:30.342 "status": "finished", 00:08:30.342 "queue_depth": 1, 00:08:30.342 "io_size": 131072, 00:08:30.342 "runtime": 1.384903, 00:08:30.342 "iops": 15976.570200223408, 00:08:30.342 "mibps": 1997.071275027926, 00:08:30.342 "io_failed": 1, 00:08:30.342 "io_timeout": 0, 00:08:30.342 "avg_latency_us": 87.00848578955585, 00:08:30.342 "min_latency_us": 26.606113537117903, 00:08:30.342 "max_latency_us": 1445.2262008733624 00:08:30.342 } 00:08:30.342 ], 00:08:30.342 "core_count": 1 00:08:30.342 } 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65318 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65318 ']' 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65318 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65318 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65318' 00:08:30.342 killing process with pid 65318 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65318 00:08:30.342 [2024-11-20 03:15:19.733962] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.342 03:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65318 00:08:30.342 [2024-11-20 03:15:19.963650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.i37dk6xkvq 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:31.725 00:08:31.725 real 0m4.496s 00:08:31.725 user 0m5.306s 00:08:31.725 sys 0m0.558s 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.725 03:15:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.725 ************************************ 00:08:31.725 END TEST raid_write_error_test 00:08:31.725 ************************************ 00:08:31.725 03:15:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:31.725 03:15:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:31.725 03:15:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.725 03:15:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.725 03:15:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.725 ************************************ 00:08:31.725 START TEST raid_state_function_test 00:08:31.725 ************************************ 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:31.725 03:15:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65462 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65462' 00:08:31.725 Process raid pid: 65462 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65462 00:08:31.725 03:15:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65462 ']' 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.725 03:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.725 [2024-11-20 03:15:21.299303] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:08:31.725 [2024-11-20 03:15:21.299476] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.985 [2024-11-20 03:15:21.475925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.985 [2024-11-20 03:15:21.591136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.245 [2024-11-20 03:15:21.788109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.245 [2024-11-20 03:15:21.788156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.506 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.506 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.506 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.506 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.506 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.767 [2024-11-20 03:15:22.141314] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.767 [2024-11-20 03:15:22.141372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.767 [2024-11-20 03:15:22.141383] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.767 [2024-11-20 03:15:22.141393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.767 [2024-11-20 03:15:22.141399] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:32.767 [2024-11-20 03:15:22.141408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.767 "name": "Existed_Raid", 00:08:32.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.767 "strip_size_kb": 64, 00:08:32.767 "state": "configuring", 00:08:32.767 "raid_level": "concat", 00:08:32.767 "superblock": false, 00:08:32.767 "num_base_bdevs": 3, 00:08:32.767 "num_base_bdevs_discovered": 0, 00:08:32.767 "num_base_bdevs_operational": 3, 00:08:32.767 "base_bdevs_list": [ 00:08:32.767 { 00:08:32.767 "name": "BaseBdev1", 00:08:32.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.767 "is_configured": false, 00:08:32.767 "data_offset": 0, 00:08:32.767 "data_size": 0 00:08:32.767 }, 00:08:32.767 { 00:08:32.767 "name": "BaseBdev2", 00:08:32.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.767 "is_configured": false, 00:08:32.767 "data_offset": 0, 00:08:32.767 "data_size": 0 00:08:32.767 }, 00:08:32.767 { 00:08:32.767 "name": "BaseBdev3", 00:08:32.767 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:32.767 "is_configured": false, 00:08:32.767 "data_offset": 0, 00:08:32.767 "data_size": 0 00:08:32.767 } 00:08:32.767 ] 00:08:32.767 }' 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.767 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.028 [2024-11-20 03:15:22.600480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.028 [2024-11-20 03:15:22.600526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.028 [2024-11-20 03:15:22.612460] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.028 [2024-11-20 03:15:22.612519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.028 [2024-11-20 03:15:22.612528] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.028 [2024-11-20 03:15:22.612538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:33.028 [2024-11-20 03:15:22.612544] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.028 [2024-11-20 03:15:22.612553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.028 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.296 [2024-11-20 03:15:22.661065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.296 BaseBdev1 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.296 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.296 [ 00:08:33.296 { 00:08:33.296 "name": "BaseBdev1", 00:08:33.296 "aliases": [ 00:08:33.296 "0857e559-92fe-4b2f-b69e-8793b0a6f8c9" 00:08:33.296 ], 00:08:33.296 "product_name": "Malloc disk", 00:08:33.296 "block_size": 512, 00:08:33.296 "num_blocks": 65536, 00:08:33.296 "uuid": "0857e559-92fe-4b2f-b69e-8793b0a6f8c9", 00:08:33.296 "assigned_rate_limits": { 00:08:33.296 "rw_ios_per_sec": 0, 00:08:33.296 "rw_mbytes_per_sec": 0, 00:08:33.296 "r_mbytes_per_sec": 0, 00:08:33.296 "w_mbytes_per_sec": 0 00:08:33.296 }, 00:08:33.296 "claimed": true, 00:08:33.296 "claim_type": "exclusive_write", 00:08:33.296 "zoned": false, 00:08:33.296 "supported_io_types": { 00:08:33.296 "read": true, 00:08:33.296 "write": true, 00:08:33.296 "unmap": true, 00:08:33.296 "flush": true, 00:08:33.296 "reset": true, 00:08:33.296 "nvme_admin": false, 00:08:33.296 "nvme_io": false, 00:08:33.296 "nvme_io_md": false, 00:08:33.296 "write_zeroes": true, 00:08:33.296 "zcopy": true, 00:08:33.296 "get_zone_info": false, 00:08:33.296 "zone_management": false, 00:08:33.296 "zone_append": false, 00:08:33.296 "compare": false, 00:08:33.297 "compare_and_write": false, 00:08:33.297 "abort": true, 00:08:33.297 "seek_hole": false, 00:08:33.297 "seek_data": false, 00:08:33.297 "copy": true, 00:08:33.297 "nvme_iov_md": false 00:08:33.297 }, 00:08:33.297 "memory_domains": [ 00:08:33.297 { 00:08:33.297 "dma_device_id": "system", 00:08:33.297 "dma_device_type": 1 00:08:33.297 }, 00:08:33.297 { 00:08:33.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:33.297 "dma_device_type": 2 00:08:33.297 } 00:08:33.297 ], 00:08:33.297 "driver_specific": {} 00:08:33.297 } 00:08:33.297 ] 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.297 03:15:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.297 "name": "Existed_Raid", 00:08:33.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.297 "strip_size_kb": 64, 00:08:33.297 "state": "configuring", 00:08:33.297 "raid_level": "concat", 00:08:33.297 "superblock": false, 00:08:33.297 "num_base_bdevs": 3, 00:08:33.297 "num_base_bdevs_discovered": 1, 00:08:33.297 "num_base_bdevs_operational": 3, 00:08:33.297 "base_bdevs_list": [ 00:08:33.297 { 00:08:33.297 "name": "BaseBdev1", 00:08:33.297 "uuid": "0857e559-92fe-4b2f-b69e-8793b0a6f8c9", 00:08:33.297 "is_configured": true, 00:08:33.297 "data_offset": 0, 00:08:33.297 "data_size": 65536 00:08:33.297 }, 00:08:33.297 { 00:08:33.297 "name": "BaseBdev2", 00:08:33.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.297 "is_configured": false, 00:08:33.297 "data_offset": 0, 00:08:33.297 "data_size": 0 00:08:33.297 }, 00:08:33.297 { 00:08:33.297 "name": "BaseBdev3", 00:08:33.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.297 "is_configured": false, 00:08:33.297 "data_offset": 0, 00:08:33.297 "data_size": 0 00:08:33.297 } 00:08:33.297 ] 00:08:33.297 }' 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.297 03:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.572 [2024-11-20 03:15:23.168266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.572 [2024-11-20 03:15:23.168331] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.572 [2024-11-20 03:15:23.180296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.572 [2024-11-20 03:15:23.182173] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.572 [2024-11-20 03:15:23.182220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.572 [2024-11-20 03:15:23.182230] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.572 [2024-11-20 03:15:23.182238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.572 03:15:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.572 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.832 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.832 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.832 "name": "Existed_Raid", 00:08:33.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.832 "strip_size_kb": 64, 00:08:33.832 "state": "configuring", 00:08:33.832 "raid_level": "concat", 00:08:33.832 "superblock": false, 00:08:33.832 "num_base_bdevs": 3, 00:08:33.832 "num_base_bdevs_discovered": 1, 00:08:33.832 "num_base_bdevs_operational": 3, 00:08:33.832 "base_bdevs_list": [ 00:08:33.832 { 00:08:33.832 "name": "BaseBdev1", 00:08:33.832 "uuid": "0857e559-92fe-4b2f-b69e-8793b0a6f8c9", 00:08:33.832 "is_configured": true, 00:08:33.832 "data_offset": 
0, 00:08:33.832 "data_size": 65536 00:08:33.832 }, 00:08:33.832 { 00:08:33.832 "name": "BaseBdev2", 00:08:33.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.832 "is_configured": false, 00:08:33.832 "data_offset": 0, 00:08:33.832 "data_size": 0 00:08:33.832 }, 00:08:33.832 { 00:08:33.832 "name": "BaseBdev3", 00:08:33.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.832 "is_configured": false, 00:08:33.832 "data_offset": 0, 00:08:33.832 "data_size": 0 00:08:33.832 } 00:08:33.832 ] 00:08:33.832 }' 00:08:33.832 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.832 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.091 [2024-11-20 03:15:23.649929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.091 BaseBdev2 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.091 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.091 [ 00:08:34.091 { 00:08:34.091 "name": "BaseBdev2", 00:08:34.091 "aliases": [ 00:08:34.091 "ecc014d8-4033-4760-a540-76a3c2f92f0a" 00:08:34.091 ], 00:08:34.091 "product_name": "Malloc disk", 00:08:34.091 "block_size": 512, 00:08:34.091 "num_blocks": 65536, 00:08:34.091 "uuid": "ecc014d8-4033-4760-a540-76a3c2f92f0a", 00:08:34.091 "assigned_rate_limits": { 00:08:34.091 "rw_ios_per_sec": 0, 00:08:34.091 "rw_mbytes_per_sec": 0, 00:08:34.091 "r_mbytes_per_sec": 0, 00:08:34.091 "w_mbytes_per_sec": 0 00:08:34.091 }, 00:08:34.091 "claimed": true, 00:08:34.092 "claim_type": "exclusive_write", 00:08:34.092 "zoned": false, 00:08:34.092 "supported_io_types": { 00:08:34.092 "read": true, 00:08:34.092 "write": true, 00:08:34.092 "unmap": true, 00:08:34.092 "flush": true, 00:08:34.092 "reset": true, 00:08:34.092 "nvme_admin": false, 00:08:34.092 "nvme_io": false, 00:08:34.092 "nvme_io_md": false, 00:08:34.092 "write_zeroes": true, 00:08:34.092 "zcopy": true, 00:08:34.092 "get_zone_info": false, 00:08:34.092 "zone_management": false, 00:08:34.092 "zone_append": false, 00:08:34.092 "compare": false, 00:08:34.092 "compare_and_write": false, 00:08:34.092 "abort": true, 00:08:34.092 "seek_hole": 
false, 00:08:34.092 "seek_data": false, 00:08:34.092 "copy": true, 00:08:34.092 "nvme_iov_md": false 00:08:34.092 }, 00:08:34.092 "memory_domains": [ 00:08:34.092 { 00:08:34.092 "dma_device_id": "system", 00:08:34.092 "dma_device_type": 1 00:08:34.092 }, 00:08:34.092 { 00:08:34.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.092 "dma_device_type": 2 00:08:34.092 } 00:08:34.092 ], 00:08:34.092 "driver_specific": {} 00:08:34.092 } 00:08:34.092 ] 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.092 03:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.351 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.351 "name": "Existed_Raid", 00:08:34.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.351 "strip_size_kb": 64, 00:08:34.351 "state": "configuring", 00:08:34.351 "raid_level": "concat", 00:08:34.351 "superblock": false, 00:08:34.351 "num_base_bdevs": 3, 00:08:34.351 "num_base_bdevs_discovered": 2, 00:08:34.351 "num_base_bdevs_operational": 3, 00:08:34.351 "base_bdevs_list": [ 00:08:34.351 { 00:08:34.351 "name": "BaseBdev1", 00:08:34.351 "uuid": "0857e559-92fe-4b2f-b69e-8793b0a6f8c9", 00:08:34.351 "is_configured": true, 00:08:34.351 "data_offset": 0, 00:08:34.351 "data_size": 65536 00:08:34.351 }, 00:08:34.351 { 00:08:34.351 "name": "BaseBdev2", 00:08:34.351 "uuid": "ecc014d8-4033-4760-a540-76a3c2f92f0a", 00:08:34.351 "is_configured": true, 00:08:34.351 "data_offset": 0, 00:08:34.351 "data_size": 65536 00:08:34.351 }, 00:08:34.351 { 00:08:34.351 "name": "BaseBdev3", 00:08:34.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.351 "is_configured": false, 00:08:34.351 "data_offset": 0, 00:08:34.351 "data_size": 0 00:08:34.352 } 00:08:34.352 ] 00:08:34.352 }' 00:08:34.352 03:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.352 03:15:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.611 [2024-11-20 03:15:24.194160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:34.611 [2024-11-20 03:15:24.194309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:34.611 [2024-11-20 03:15:24.194328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:34.611 [2024-11-20 03:15:24.194685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:34.611 [2024-11-20 03:15:24.194869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:34.611 [2024-11-20 03:15:24.194881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:34.611 [2024-11-20 03:15:24.195195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.611 BaseBdev3 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.611 03:15:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.611 [ 00:08:34.611 { 00:08:34.611 "name": "BaseBdev3", 00:08:34.611 "aliases": [ 00:08:34.611 "16a6e42e-5cb6-4a78-b82c-b902ae51b59a" 00:08:34.611 ], 00:08:34.611 "product_name": "Malloc disk", 00:08:34.611 "block_size": 512, 00:08:34.611 "num_blocks": 65536, 00:08:34.611 "uuid": "16a6e42e-5cb6-4a78-b82c-b902ae51b59a", 00:08:34.611 "assigned_rate_limits": { 00:08:34.611 "rw_ios_per_sec": 0, 00:08:34.611 "rw_mbytes_per_sec": 0, 00:08:34.611 "r_mbytes_per_sec": 0, 00:08:34.611 "w_mbytes_per_sec": 0 00:08:34.611 }, 00:08:34.611 "claimed": true, 00:08:34.611 "claim_type": "exclusive_write", 00:08:34.611 "zoned": false, 00:08:34.611 "supported_io_types": { 00:08:34.611 "read": true, 00:08:34.611 "write": true, 00:08:34.611 "unmap": true, 00:08:34.611 "flush": true, 00:08:34.611 "reset": true, 00:08:34.611 "nvme_admin": false, 00:08:34.611 "nvme_io": false, 00:08:34.611 "nvme_io_md": false, 00:08:34.611 "write_zeroes": true, 00:08:34.611 "zcopy": true, 00:08:34.611 "get_zone_info": false, 00:08:34.611 "zone_management": false, 00:08:34.611 "zone_append": false, 00:08:34.611 "compare": false, 
00:08:34.611 "compare_and_write": false, 00:08:34.611 "abort": true, 00:08:34.611 "seek_hole": false, 00:08:34.611 "seek_data": false, 00:08:34.611 "copy": true, 00:08:34.611 "nvme_iov_md": false 00:08:34.611 }, 00:08:34.611 "memory_domains": [ 00:08:34.611 { 00:08:34.611 "dma_device_id": "system", 00:08:34.611 "dma_device_type": 1 00:08:34.611 }, 00:08:34.611 { 00:08:34.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.611 "dma_device_type": 2 00:08:34.611 } 00:08:34.611 ], 00:08:34.611 "driver_specific": {} 00:08:34.611 } 00:08:34.611 ] 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:34.611 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.871 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.871 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.871 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.871 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.871 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.871 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.871 "name": "Existed_Raid", 00:08:34.871 "uuid": "a491246a-e342-48d5-8470-027596b341d5", 00:08:34.871 "strip_size_kb": 64, 00:08:34.871 "state": "online", 00:08:34.871 "raid_level": "concat", 00:08:34.871 "superblock": false, 00:08:34.871 "num_base_bdevs": 3, 00:08:34.871 "num_base_bdevs_discovered": 3, 00:08:34.871 "num_base_bdevs_operational": 3, 00:08:34.871 "base_bdevs_list": [ 00:08:34.871 { 00:08:34.871 "name": "BaseBdev1", 00:08:34.871 "uuid": "0857e559-92fe-4b2f-b69e-8793b0a6f8c9", 00:08:34.871 "is_configured": true, 00:08:34.871 "data_offset": 0, 00:08:34.871 "data_size": 65536 00:08:34.871 }, 00:08:34.871 { 00:08:34.871 "name": "BaseBdev2", 00:08:34.871 "uuid": "ecc014d8-4033-4760-a540-76a3c2f92f0a", 00:08:34.871 "is_configured": true, 00:08:34.871 "data_offset": 0, 00:08:34.871 "data_size": 65536 00:08:34.871 }, 00:08:34.871 { 00:08:34.871 "name": "BaseBdev3", 00:08:34.871 "uuid": "16a6e42e-5cb6-4a78-b82c-b902ae51b59a", 00:08:34.871 "is_configured": true, 00:08:34.871 "data_offset": 0, 00:08:34.871 "data_size": 65536 00:08:34.871 } 00:08:34.871 ] 00:08:34.871 }' 00:08:34.871 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:34.871 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.130 [2024-11-20 03:15:24.689794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.130 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.130 "name": "Existed_Raid", 00:08:35.130 "aliases": [ 00:08:35.130 "a491246a-e342-48d5-8470-027596b341d5" 00:08:35.130 ], 00:08:35.130 "product_name": "Raid Volume", 00:08:35.130 "block_size": 512, 00:08:35.130 "num_blocks": 196608, 00:08:35.130 "uuid": "a491246a-e342-48d5-8470-027596b341d5", 00:08:35.130 "assigned_rate_limits": { 00:08:35.130 "rw_ios_per_sec": 0, 00:08:35.130 "rw_mbytes_per_sec": 0, 00:08:35.130 "r_mbytes_per_sec": 
0, 00:08:35.130 "w_mbytes_per_sec": 0 00:08:35.130 }, 00:08:35.130 "claimed": false, 00:08:35.130 "zoned": false, 00:08:35.130 "supported_io_types": { 00:08:35.130 "read": true, 00:08:35.130 "write": true, 00:08:35.130 "unmap": true, 00:08:35.130 "flush": true, 00:08:35.130 "reset": true, 00:08:35.130 "nvme_admin": false, 00:08:35.130 "nvme_io": false, 00:08:35.130 "nvme_io_md": false, 00:08:35.130 "write_zeroes": true, 00:08:35.130 "zcopy": false, 00:08:35.130 "get_zone_info": false, 00:08:35.130 "zone_management": false, 00:08:35.130 "zone_append": false, 00:08:35.130 "compare": false, 00:08:35.130 "compare_and_write": false, 00:08:35.130 "abort": false, 00:08:35.131 "seek_hole": false, 00:08:35.131 "seek_data": false, 00:08:35.131 "copy": false, 00:08:35.131 "nvme_iov_md": false 00:08:35.131 }, 00:08:35.131 "memory_domains": [ 00:08:35.131 { 00:08:35.131 "dma_device_id": "system", 00:08:35.131 "dma_device_type": 1 00:08:35.131 }, 00:08:35.131 { 00:08:35.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.131 "dma_device_type": 2 00:08:35.131 }, 00:08:35.131 { 00:08:35.131 "dma_device_id": "system", 00:08:35.131 "dma_device_type": 1 00:08:35.131 }, 00:08:35.131 { 00:08:35.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.131 "dma_device_type": 2 00:08:35.131 }, 00:08:35.131 { 00:08:35.131 "dma_device_id": "system", 00:08:35.131 "dma_device_type": 1 00:08:35.131 }, 00:08:35.131 { 00:08:35.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.131 "dma_device_type": 2 00:08:35.131 } 00:08:35.131 ], 00:08:35.131 "driver_specific": { 00:08:35.131 "raid": { 00:08:35.131 "uuid": "a491246a-e342-48d5-8470-027596b341d5", 00:08:35.131 "strip_size_kb": 64, 00:08:35.131 "state": "online", 00:08:35.131 "raid_level": "concat", 00:08:35.131 "superblock": false, 00:08:35.131 "num_base_bdevs": 3, 00:08:35.131 "num_base_bdevs_discovered": 3, 00:08:35.131 "num_base_bdevs_operational": 3, 00:08:35.131 "base_bdevs_list": [ 00:08:35.131 { 00:08:35.131 "name": "BaseBdev1", 
00:08:35.131 "uuid": "0857e559-92fe-4b2f-b69e-8793b0a6f8c9", 00:08:35.131 "is_configured": true, 00:08:35.131 "data_offset": 0, 00:08:35.131 "data_size": 65536 00:08:35.131 }, 00:08:35.131 { 00:08:35.131 "name": "BaseBdev2", 00:08:35.131 "uuid": "ecc014d8-4033-4760-a540-76a3c2f92f0a", 00:08:35.131 "is_configured": true, 00:08:35.131 "data_offset": 0, 00:08:35.131 "data_size": 65536 00:08:35.131 }, 00:08:35.131 { 00:08:35.131 "name": "BaseBdev3", 00:08:35.131 "uuid": "16a6e42e-5cb6-4a78-b82c-b902ae51b59a", 00:08:35.131 "is_configured": true, 00:08:35.131 "data_offset": 0, 00:08:35.131 "data_size": 65536 00:08:35.131 } 00:08:35.131 ] 00:08:35.131 } 00:08:35.131 } 00:08:35.131 }' 00:08:35.131 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.131 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.131 BaseBdev2 00:08:35.131 BaseBdev3' 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.391 03:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.391 [2024-11-20 03:15:24.949051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.391 [2024-11-20 03:15:24.949082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.391 [2024-11-20 03:15:24.949139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.650 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.650 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:35.650 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:35.650 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.651 "name": "Existed_Raid", 00:08:35.651 "uuid": "a491246a-e342-48d5-8470-027596b341d5", 00:08:35.651 "strip_size_kb": 64, 00:08:35.651 "state": "offline", 00:08:35.651 "raid_level": "concat", 00:08:35.651 "superblock": false, 00:08:35.651 "num_base_bdevs": 3, 00:08:35.651 "num_base_bdevs_discovered": 2, 00:08:35.651 "num_base_bdevs_operational": 2, 00:08:35.651 "base_bdevs_list": [ 00:08:35.651 { 00:08:35.651 "name": null, 00:08:35.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.651 "is_configured": false, 00:08:35.651 "data_offset": 0, 00:08:35.651 "data_size": 65536 00:08:35.651 }, 00:08:35.651 { 00:08:35.651 "name": "BaseBdev2", 00:08:35.651 "uuid": 
"ecc014d8-4033-4760-a540-76a3c2f92f0a", 00:08:35.651 "is_configured": true, 00:08:35.651 "data_offset": 0, 00:08:35.651 "data_size": 65536 00:08:35.651 }, 00:08:35.651 { 00:08:35.651 "name": "BaseBdev3", 00:08:35.651 "uuid": "16a6e42e-5cb6-4a78-b82c-b902ae51b59a", 00:08:35.651 "is_configured": true, 00:08:35.651 "data_offset": 0, 00:08:35.651 "data_size": 65536 00:08:35.651 } 00:08:35.651 ] 00:08:35.651 }' 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.651 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.911 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:35.911 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.911 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.911 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.911 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.911 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.911 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.171 [2024-11-20 03:15:25.551958] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.171 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.171 [2024-11-20 03:15:25.706330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:36.171 [2024-11-20 03:15:25.706387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.432 03:15:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 BaseBdev2 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.432 
03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 [ 00:08:36.432 { 00:08:36.432 "name": "BaseBdev2", 00:08:36.432 "aliases": [ 00:08:36.432 "6ab79ea3-b254-4da7-98d2-82af26674af9" 00:08:36.432 ], 00:08:36.432 "product_name": "Malloc disk", 00:08:36.432 "block_size": 512, 00:08:36.432 "num_blocks": 65536, 00:08:36.432 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:36.432 "assigned_rate_limits": { 00:08:36.432 "rw_ios_per_sec": 0, 00:08:36.432 "rw_mbytes_per_sec": 0, 00:08:36.432 "r_mbytes_per_sec": 0, 00:08:36.432 "w_mbytes_per_sec": 0 00:08:36.432 }, 00:08:36.432 "claimed": false, 00:08:36.432 "zoned": false, 00:08:36.432 "supported_io_types": { 00:08:36.432 "read": true, 00:08:36.432 "write": true, 00:08:36.432 "unmap": true, 00:08:36.432 "flush": true, 00:08:36.432 "reset": true, 00:08:36.432 "nvme_admin": false, 00:08:36.432 "nvme_io": false, 00:08:36.432 "nvme_io_md": false, 00:08:36.432 "write_zeroes": true, 
00:08:36.432 "zcopy": true, 00:08:36.432 "get_zone_info": false, 00:08:36.432 "zone_management": false, 00:08:36.432 "zone_append": false, 00:08:36.432 "compare": false, 00:08:36.432 "compare_and_write": false, 00:08:36.432 "abort": true, 00:08:36.432 "seek_hole": false, 00:08:36.432 "seek_data": false, 00:08:36.432 "copy": true, 00:08:36.432 "nvme_iov_md": false 00:08:36.432 }, 00:08:36.432 "memory_domains": [ 00:08:36.432 { 00:08:36.432 "dma_device_id": "system", 00:08:36.432 "dma_device_type": 1 00:08:36.432 }, 00:08:36.432 { 00:08:36.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.432 "dma_device_type": 2 00:08:36.432 } 00:08:36.432 ], 00:08:36.432 "driver_specific": {} 00:08:36.432 } 00:08:36.432 ] 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 BaseBdev3 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.432 03:15:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.432 03:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 [ 00:08:36.432 { 00:08:36.432 "name": "BaseBdev3", 00:08:36.432 "aliases": [ 00:08:36.432 "b81c7882-b1cc-48ab-abaa-2859a74582c5" 00:08:36.432 ], 00:08:36.432 "product_name": "Malloc disk", 00:08:36.432 "block_size": 512, 00:08:36.432 "num_blocks": 65536, 00:08:36.432 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:36.432 "assigned_rate_limits": { 00:08:36.432 "rw_ios_per_sec": 0, 00:08:36.432 "rw_mbytes_per_sec": 0, 00:08:36.432 "r_mbytes_per_sec": 0, 00:08:36.432 "w_mbytes_per_sec": 0 00:08:36.432 }, 00:08:36.432 "claimed": false, 00:08:36.432 "zoned": false, 00:08:36.432 "supported_io_types": { 00:08:36.432 "read": true, 00:08:36.432 "write": true, 00:08:36.432 "unmap": true, 00:08:36.432 "flush": true, 00:08:36.432 "reset": true, 00:08:36.432 "nvme_admin": false, 00:08:36.432 "nvme_io": false, 00:08:36.432 "nvme_io_md": false, 00:08:36.432 "write_zeroes": true, 
00:08:36.432 "zcopy": true, 00:08:36.432 "get_zone_info": false, 00:08:36.432 "zone_management": false, 00:08:36.432 "zone_append": false, 00:08:36.432 "compare": false, 00:08:36.432 "compare_and_write": false, 00:08:36.432 "abort": true, 00:08:36.432 "seek_hole": false, 00:08:36.432 "seek_data": false, 00:08:36.432 "copy": true, 00:08:36.432 "nvme_iov_md": false 00:08:36.432 }, 00:08:36.432 "memory_domains": [ 00:08:36.432 { 00:08:36.432 "dma_device_id": "system", 00:08:36.432 "dma_device_type": 1 00:08:36.432 }, 00:08:36.432 { 00:08:36.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.432 "dma_device_type": 2 00:08:36.432 } 00:08:36.432 ], 00:08:36.432 "driver_specific": {} 00:08:36.432 } 00:08:36.432 ] 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.432 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 [2024-11-20 03:15:26.037708] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.432 [2024-11-20 03:15:26.037817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.432 [2024-11-20 03:15:26.037870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.433 [2024-11-20 03:15:26.039716] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.433 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.693 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.693 03:15:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.693 "name": "Existed_Raid", 00:08:36.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.693 "strip_size_kb": 64, 00:08:36.693 "state": "configuring", 00:08:36.693 "raid_level": "concat", 00:08:36.693 "superblock": false, 00:08:36.693 "num_base_bdevs": 3, 00:08:36.693 "num_base_bdevs_discovered": 2, 00:08:36.693 "num_base_bdevs_operational": 3, 00:08:36.693 "base_bdevs_list": [ 00:08:36.693 { 00:08:36.693 "name": "BaseBdev1", 00:08:36.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.693 "is_configured": false, 00:08:36.693 "data_offset": 0, 00:08:36.693 "data_size": 0 00:08:36.693 }, 00:08:36.693 { 00:08:36.693 "name": "BaseBdev2", 00:08:36.693 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:36.693 "is_configured": true, 00:08:36.693 "data_offset": 0, 00:08:36.693 "data_size": 65536 00:08:36.693 }, 00:08:36.693 { 00:08:36.693 "name": "BaseBdev3", 00:08:36.693 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:36.693 "is_configured": true, 00:08:36.693 "data_offset": 0, 00:08:36.693 "data_size": 65536 00:08:36.693 } 00:08:36.693 ] 00:08:36.693 }' 00:08:36.693 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.693 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.953 [2024-11-20 03:15:26.464927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.953 "name": "Existed_Raid", 00:08:36.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.953 "strip_size_kb": 64, 00:08:36.953 "state": "configuring", 00:08:36.953 "raid_level": "concat", 00:08:36.953 "superblock": false, 
00:08:36.953 "num_base_bdevs": 3, 00:08:36.953 "num_base_bdevs_discovered": 1, 00:08:36.953 "num_base_bdevs_operational": 3, 00:08:36.953 "base_bdevs_list": [ 00:08:36.953 { 00:08:36.953 "name": "BaseBdev1", 00:08:36.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.953 "is_configured": false, 00:08:36.953 "data_offset": 0, 00:08:36.953 "data_size": 0 00:08:36.953 }, 00:08:36.953 { 00:08:36.953 "name": null, 00:08:36.953 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:36.953 "is_configured": false, 00:08:36.953 "data_offset": 0, 00:08:36.953 "data_size": 65536 00:08:36.953 }, 00:08:36.953 { 00:08:36.953 "name": "BaseBdev3", 00:08:36.953 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:36.953 "is_configured": true, 00:08:36.953 "data_offset": 0, 00:08:36.953 "data_size": 65536 00:08:36.953 } 00:08:36.953 ] 00:08:36.953 }' 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.953 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.524 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.524 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.524 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.524 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.524 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.524 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:37.524 03:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.524 03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.524 
03:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.524 [2024-11-20 03:15:27.019675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.524 BaseBdev1 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.524 [ 00:08:37.524 { 00:08:37.524 "name": "BaseBdev1", 00:08:37.524 "aliases": [ 00:08:37.524 "708f128f-2eec-47da-8b83-5f8a0d70d2d2" 00:08:37.524 ], 00:08:37.524 "product_name": 
"Malloc disk", 00:08:37.524 "block_size": 512, 00:08:37.524 "num_blocks": 65536, 00:08:37.524 "uuid": "708f128f-2eec-47da-8b83-5f8a0d70d2d2", 00:08:37.524 "assigned_rate_limits": { 00:08:37.524 "rw_ios_per_sec": 0, 00:08:37.524 "rw_mbytes_per_sec": 0, 00:08:37.524 "r_mbytes_per_sec": 0, 00:08:37.524 "w_mbytes_per_sec": 0 00:08:37.524 }, 00:08:37.524 "claimed": true, 00:08:37.524 "claim_type": "exclusive_write", 00:08:37.524 "zoned": false, 00:08:37.524 "supported_io_types": { 00:08:37.524 "read": true, 00:08:37.524 "write": true, 00:08:37.524 "unmap": true, 00:08:37.524 "flush": true, 00:08:37.524 "reset": true, 00:08:37.524 "nvme_admin": false, 00:08:37.524 "nvme_io": false, 00:08:37.524 "nvme_io_md": false, 00:08:37.524 "write_zeroes": true, 00:08:37.524 "zcopy": true, 00:08:37.524 "get_zone_info": false, 00:08:37.524 "zone_management": false, 00:08:37.524 "zone_append": false, 00:08:37.524 "compare": false, 00:08:37.524 "compare_and_write": false, 00:08:37.524 "abort": true, 00:08:37.524 "seek_hole": false, 00:08:37.524 "seek_data": false, 00:08:37.524 "copy": true, 00:08:37.524 "nvme_iov_md": false 00:08:37.524 }, 00:08:37.524 "memory_domains": [ 00:08:37.524 { 00:08:37.524 "dma_device_id": "system", 00:08:37.524 "dma_device_type": 1 00:08:37.524 }, 00:08:37.524 { 00:08:37.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.524 "dma_device_type": 2 00:08:37.524 } 00:08:37.524 ], 00:08:37.524 "driver_specific": {} 00:08:37.524 } 00:08:37.524 ] 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.524 03:15:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.524 "name": "Existed_Raid", 00:08:37.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.524 "strip_size_kb": 64, 00:08:37.524 "state": "configuring", 00:08:37.524 "raid_level": "concat", 00:08:37.524 "superblock": false, 00:08:37.524 "num_base_bdevs": 3, 00:08:37.524 "num_base_bdevs_discovered": 2, 00:08:37.524 "num_base_bdevs_operational": 3, 00:08:37.524 "base_bdevs_list": [ 00:08:37.524 { 00:08:37.524 "name": "BaseBdev1", 
00:08:37.524 "uuid": "708f128f-2eec-47da-8b83-5f8a0d70d2d2", 00:08:37.524 "is_configured": true, 00:08:37.524 "data_offset": 0, 00:08:37.524 "data_size": 65536 00:08:37.524 }, 00:08:37.524 { 00:08:37.524 "name": null, 00:08:37.524 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:37.524 "is_configured": false, 00:08:37.524 "data_offset": 0, 00:08:37.524 "data_size": 65536 00:08:37.524 }, 00:08:37.524 { 00:08:37.524 "name": "BaseBdev3", 00:08:37.524 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:37.524 "is_configured": true, 00:08:37.524 "data_offset": 0, 00:08:37.524 "data_size": 65536 00:08:37.524 } 00:08:37.524 ] 00:08:37.524 }' 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.524 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.095 [2024-11-20 03:15:27.514873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:38.095 
03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.095 "name": "Existed_Raid", 00:08:38.095 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:38.095 "strip_size_kb": 64, 00:08:38.095 "state": "configuring", 00:08:38.095 "raid_level": "concat", 00:08:38.095 "superblock": false, 00:08:38.095 "num_base_bdevs": 3, 00:08:38.095 "num_base_bdevs_discovered": 1, 00:08:38.095 "num_base_bdevs_operational": 3, 00:08:38.095 "base_bdevs_list": [ 00:08:38.095 { 00:08:38.095 "name": "BaseBdev1", 00:08:38.095 "uuid": "708f128f-2eec-47da-8b83-5f8a0d70d2d2", 00:08:38.095 "is_configured": true, 00:08:38.095 "data_offset": 0, 00:08:38.095 "data_size": 65536 00:08:38.095 }, 00:08:38.095 { 00:08:38.095 "name": null, 00:08:38.095 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:38.095 "is_configured": false, 00:08:38.095 "data_offset": 0, 00:08:38.095 "data_size": 65536 00:08:38.095 }, 00:08:38.095 { 00:08:38.095 "name": null, 00:08:38.095 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:38.095 "is_configured": false, 00:08:38.095 "data_offset": 0, 00:08:38.095 "data_size": 65536 00:08:38.095 } 00:08:38.095 ] 00:08:38.095 }' 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.095 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.355 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.355 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.355 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.355 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.355 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.615 [2024-11-20 03:15:27.994078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.615 03:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.615 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.615 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.615 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:38.615 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.615 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.615 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.615 "name": "Existed_Raid", 00:08:38.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.615 "strip_size_kb": 64, 00:08:38.615 "state": "configuring", 00:08:38.615 "raid_level": "concat", 00:08:38.615 "superblock": false, 00:08:38.615 "num_base_bdevs": 3, 00:08:38.615 "num_base_bdevs_discovered": 2, 00:08:38.615 "num_base_bdevs_operational": 3, 00:08:38.615 "base_bdevs_list": [ 00:08:38.615 { 00:08:38.615 "name": "BaseBdev1", 00:08:38.615 "uuid": "708f128f-2eec-47da-8b83-5f8a0d70d2d2", 00:08:38.615 "is_configured": true, 00:08:38.615 "data_offset": 0, 00:08:38.615 "data_size": 65536 00:08:38.615 }, 00:08:38.615 { 00:08:38.615 "name": null, 00:08:38.615 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:38.615 "is_configured": false, 00:08:38.615 "data_offset": 0, 00:08:38.615 "data_size": 65536 00:08:38.615 }, 00:08:38.616 { 00:08:38.616 "name": "BaseBdev3", 00:08:38.616 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:38.616 "is_configured": true, 00:08:38.616 "data_offset": 0, 00:08:38.616 "data_size": 65536 00:08:38.616 } 00:08:38.616 ] 00:08:38.616 }' 00:08:38.616 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.616 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.876 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.876 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.876 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.876 03:15:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.876 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.876 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:38.876 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.876 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.876 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.876 [2024-11-20 03:15:28.473259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.136 "name": "Existed_Raid", 00:08:39.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.136 "strip_size_kb": 64, 00:08:39.136 "state": "configuring", 00:08:39.136 "raid_level": "concat", 00:08:39.136 "superblock": false, 00:08:39.136 "num_base_bdevs": 3, 00:08:39.136 "num_base_bdevs_discovered": 1, 00:08:39.136 "num_base_bdevs_operational": 3, 00:08:39.136 "base_bdevs_list": [ 00:08:39.136 { 00:08:39.136 "name": null, 00:08:39.136 "uuid": "708f128f-2eec-47da-8b83-5f8a0d70d2d2", 00:08:39.136 "is_configured": false, 00:08:39.136 "data_offset": 0, 00:08:39.136 "data_size": 65536 00:08:39.136 }, 00:08:39.136 { 00:08:39.136 "name": null, 00:08:39.136 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:39.136 "is_configured": false, 00:08:39.136 "data_offset": 0, 00:08:39.136 "data_size": 65536 00:08:39.136 }, 00:08:39.136 { 00:08:39.136 "name": "BaseBdev3", 00:08:39.136 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:39.136 "is_configured": true, 00:08:39.136 "data_offset": 0, 00:08:39.136 "data_size": 65536 00:08:39.136 } 00:08:39.136 ] 00:08:39.136 }' 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.136 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:39.396 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.396 03:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.396 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.396 03:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.396 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.660 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:39.660 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.661 [2024-11-20 03:15:29.037084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.661 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.661 "name": "Existed_Raid", 00:08:39.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.661 "strip_size_kb": 64, 00:08:39.661 "state": "configuring", 00:08:39.661 "raid_level": "concat", 00:08:39.661 "superblock": false, 00:08:39.661 "num_base_bdevs": 3, 00:08:39.661 "num_base_bdevs_discovered": 2, 00:08:39.661 "num_base_bdevs_operational": 3, 00:08:39.661 "base_bdevs_list": [ 00:08:39.661 { 00:08:39.661 "name": null, 00:08:39.662 "uuid": "708f128f-2eec-47da-8b83-5f8a0d70d2d2", 00:08:39.662 "is_configured": false, 00:08:39.662 "data_offset": 0, 00:08:39.662 "data_size": 65536 00:08:39.662 }, 00:08:39.662 { 00:08:39.662 "name": "BaseBdev2", 00:08:39.662 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:39.662 "is_configured": true, 00:08:39.662 "data_offset": 0, 00:08:39.662 "data_size": 65536 00:08:39.662 }, 00:08:39.662 { 
00:08:39.662 "name": "BaseBdev3", 00:08:39.662 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:39.662 "is_configured": true, 00:08:39.662 "data_offset": 0, 00:08:39.662 "data_size": 65536 00:08:39.662 } 00:08:39.662 ] 00:08:39.662 }' 00:08:39.662 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.662 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.924 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:40.184 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.184 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 708f128f-2eec-47da-8b83-5f8a0d70d2d2 00:08:40.184 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.184 03:15:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.184 [2024-11-20 03:15:29.620908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:40.185 [2024-11-20 03:15:29.620951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:40.185 [2024-11-20 03:15:29.620959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:40.185 [2024-11-20 03:15:29.621191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:40.185 [2024-11-20 03:15:29.621331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:40.185 [2024-11-20 03:15:29.621340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:40.185 [2024-11-20 03:15:29.621585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.185 NewBaseBdev 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.185 [ 00:08:40.185 { 00:08:40.185 "name": "NewBaseBdev", 00:08:40.185 "aliases": [ 00:08:40.185 "708f128f-2eec-47da-8b83-5f8a0d70d2d2" 00:08:40.185 ], 00:08:40.185 "product_name": "Malloc disk", 00:08:40.185 "block_size": 512, 00:08:40.185 "num_blocks": 65536, 00:08:40.185 "uuid": "708f128f-2eec-47da-8b83-5f8a0d70d2d2", 00:08:40.185 "assigned_rate_limits": { 00:08:40.185 "rw_ios_per_sec": 0, 00:08:40.185 "rw_mbytes_per_sec": 0, 00:08:40.185 "r_mbytes_per_sec": 0, 00:08:40.185 "w_mbytes_per_sec": 0 00:08:40.185 }, 00:08:40.185 "claimed": true, 00:08:40.185 "claim_type": "exclusive_write", 00:08:40.185 "zoned": false, 00:08:40.185 "supported_io_types": { 00:08:40.185 "read": true, 00:08:40.185 "write": true, 00:08:40.185 "unmap": true, 00:08:40.185 "flush": true, 00:08:40.185 "reset": true, 00:08:40.185 "nvme_admin": false, 00:08:40.185 "nvme_io": false, 00:08:40.185 "nvme_io_md": false, 00:08:40.185 "write_zeroes": true, 00:08:40.185 "zcopy": true, 00:08:40.185 "get_zone_info": false, 00:08:40.185 "zone_management": false, 00:08:40.185 "zone_append": false, 00:08:40.185 "compare": false, 00:08:40.185 "compare_and_write": false, 00:08:40.185 "abort": true, 00:08:40.185 "seek_hole": false, 00:08:40.185 "seek_data": false, 00:08:40.185 "copy": true, 00:08:40.185 "nvme_iov_md": false 00:08:40.185 }, 00:08:40.185 "memory_domains": [ 00:08:40.185 { 00:08:40.185 
"dma_device_id": "system", 00:08:40.185 "dma_device_type": 1 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.185 "dma_device_type": 2 00:08:40.185 } 00:08:40.185 ], 00:08:40.185 "driver_specific": {} 00:08:40.185 } 00:08:40.185 ] 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.185 "name": "Existed_Raid", 00:08:40.185 "uuid": "894d7965-d377-47f7-ad72-65e9b37ebb8f", 00:08:40.185 "strip_size_kb": 64, 00:08:40.185 "state": "online", 00:08:40.185 "raid_level": "concat", 00:08:40.185 "superblock": false, 00:08:40.185 "num_base_bdevs": 3, 00:08:40.185 "num_base_bdevs_discovered": 3, 00:08:40.185 "num_base_bdevs_operational": 3, 00:08:40.185 "base_bdevs_list": [ 00:08:40.185 { 00:08:40.185 "name": "NewBaseBdev", 00:08:40.185 "uuid": "708f128f-2eec-47da-8b83-5f8a0d70d2d2", 00:08:40.185 "is_configured": true, 00:08:40.185 "data_offset": 0, 00:08:40.185 "data_size": 65536 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "BaseBdev2", 00:08:40.185 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:40.185 "is_configured": true, 00:08:40.185 "data_offset": 0, 00:08:40.185 "data_size": 65536 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "BaseBdev3", 00:08:40.185 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:40.185 "is_configured": true, 00:08:40.185 "data_offset": 0, 00:08:40.185 "data_size": 65536 00:08:40.185 } 00:08:40.185 ] 00:08:40.185 }' 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.185 03:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.756 [2024-11-20 03:15:30.156339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.756 "name": "Existed_Raid", 00:08:40.756 "aliases": [ 00:08:40.756 "894d7965-d377-47f7-ad72-65e9b37ebb8f" 00:08:40.756 ], 00:08:40.756 "product_name": "Raid Volume", 00:08:40.756 "block_size": 512, 00:08:40.756 "num_blocks": 196608, 00:08:40.756 "uuid": "894d7965-d377-47f7-ad72-65e9b37ebb8f", 00:08:40.756 "assigned_rate_limits": { 00:08:40.756 "rw_ios_per_sec": 0, 00:08:40.756 "rw_mbytes_per_sec": 0, 00:08:40.756 "r_mbytes_per_sec": 0, 00:08:40.756 "w_mbytes_per_sec": 0 00:08:40.756 }, 00:08:40.756 "claimed": false, 00:08:40.756 "zoned": false, 00:08:40.756 "supported_io_types": { 00:08:40.756 "read": true, 00:08:40.756 "write": true, 00:08:40.756 "unmap": true, 00:08:40.756 "flush": true, 00:08:40.756 "reset": true, 00:08:40.756 "nvme_admin": false, 00:08:40.756 "nvme_io": false, 00:08:40.756 "nvme_io_md": false, 00:08:40.756 "write_zeroes": true, 00:08:40.756 "zcopy": false, 
00:08:40.756 "get_zone_info": false, 00:08:40.756 "zone_management": false, 00:08:40.756 "zone_append": false, 00:08:40.756 "compare": false, 00:08:40.756 "compare_and_write": false, 00:08:40.756 "abort": false, 00:08:40.756 "seek_hole": false, 00:08:40.756 "seek_data": false, 00:08:40.756 "copy": false, 00:08:40.756 "nvme_iov_md": false 00:08:40.756 }, 00:08:40.756 "memory_domains": [ 00:08:40.756 { 00:08:40.756 "dma_device_id": "system", 00:08:40.756 "dma_device_type": 1 00:08:40.756 }, 00:08:40.756 { 00:08:40.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.756 "dma_device_type": 2 00:08:40.756 }, 00:08:40.756 { 00:08:40.756 "dma_device_id": "system", 00:08:40.756 "dma_device_type": 1 00:08:40.756 }, 00:08:40.756 { 00:08:40.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.756 "dma_device_type": 2 00:08:40.756 }, 00:08:40.756 { 00:08:40.756 "dma_device_id": "system", 00:08:40.756 "dma_device_type": 1 00:08:40.756 }, 00:08:40.756 { 00:08:40.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.756 "dma_device_type": 2 00:08:40.756 } 00:08:40.756 ], 00:08:40.756 "driver_specific": { 00:08:40.756 "raid": { 00:08:40.756 "uuid": "894d7965-d377-47f7-ad72-65e9b37ebb8f", 00:08:40.756 "strip_size_kb": 64, 00:08:40.756 "state": "online", 00:08:40.756 "raid_level": "concat", 00:08:40.756 "superblock": false, 00:08:40.756 "num_base_bdevs": 3, 00:08:40.756 "num_base_bdevs_discovered": 3, 00:08:40.756 "num_base_bdevs_operational": 3, 00:08:40.756 "base_bdevs_list": [ 00:08:40.756 { 00:08:40.756 "name": "NewBaseBdev", 00:08:40.756 "uuid": "708f128f-2eec-47da-8b83-5f8a0d70d2d2", 00:08:40.756 "is_configured": true, 00:08:40.756 "data_offset": 0, 00:08:40.756 "data_size": 65536 00:08:40.756 }, 00:08:40.756 { 00:08:40.756 "name": "BaseBdev2", 00:08:40.756 "uuid": "6ab79ea3-b254-4da7-98d2-82af26674af9", 00:08:40.756 "is_configured": true, 00:08:40.756 "data_offset": 0, 00:08:40.756 "data_size": 65536 00:08:40.756 }, 00:08:40.756 { 00:08:40.756 "name": "BaseBdev3", 
00:08:40.756 "uuid": "b81c7882-b1cc-48ab-abaa-2859a74582c5", 00:08:40.756 "is_configured": true, 00:08:40.756 "data_offset": 0, 00:08:40.756 "data_size": 65536 00:08:40.756 } 00:08:40.756 ] 00:08:40.756 } 00:08:40.756 } 00:08:40.756 }' 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:40.756 BaseBdev2 00:08:40.756 BaseBdev3' 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.756 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.757 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.016 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.016 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.016 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.016 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:41.017 [2024-11-20 03:15:30.427637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.017 [2024-11-20 03:15:30.427722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.017 [2024-11-20 03:15:30.427842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.017 [2024-11-20 03:15:30.427924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.017 [2024-11-20 03:15:30.427972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65462 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65462 ']' 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65462 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65462 00:08:41.017 killing process with pid 65462 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65462' 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65462 00:08:41.017 
[2024-11-20 03:15:30.475066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.017 03:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65462 00:08:41.275 [2024-11-20 03:15:30.771049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:42.662 00:08:42.662 real 0m10.668s 00:08:42.662 user 0m17.032s 00:08:42.662 sys 0m1.812s 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.662 ************************************ 00:08:42.662 END TEST raid_state_function_test 00:08:42.662 ************************************ 00:08:42.662 03:15:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:42.662 03:15:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:42.662 03:15:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.662 03:15:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.662 ************************************ 00:08:42.662 START TEST raid_state_function_test_sb 00:08:42.662 ************************************ 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:42.662 03:15:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66083 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66083' 00:08:42.662 Process raid pid: 66083 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66083 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66083 ']' 00:08:42.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.662 03:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.662 [2024-11-20 03:15:32.044087] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:08:42.662 [2024-11-20 03:15:32.044209] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.662 [2024-11-20 03:15:32.218222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.922 [2024-11-20 03:15:32.333617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.922 [2024-11-20 03:15:32.536908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.922 [2024-11-20 03:15:32.536955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.492 [2024-11-20 03:15:32.885973] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.492 [2024-11-20 03:15:32.886081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.492 [2024-11-20 03:15:32.886132] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.492 [2024-11-20 03:15:32.886158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.492 [2024-11-20 03:15:32.886219] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:08:43.492 [2024-11-20 03:15:32.886232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.492 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.492 "name": "Existed_Raid", 00:08:43.492 "uuid": "45762f47-29fa-4d5c-8608-ab1f988734c9", 00:08:43.492 "strip_size_kb": 64, 00:08:43.492 "state": "configuring", 00:08:43.492 "raid_level": "concat", 00:08:43.492 "superblock": true, 00:08:43.492 "num_base_bdevs": 3, 00:08:43.492 "num_base_bdevs_discovered": 0, 00:08:43.492 "num_base_bdevs_operational": 3, 00:08:43.493 "base_bdevs_list": [ 00:08:43.493 { 00:08:43.493 "name": "BaseBdev1", 00:08:43.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.493 "is_configured": false, 00:08:43.493 "data_offset": 0, 00:08:43.493 "data_size": 0 00:08:43.493 }, 00:08:43.493 { 00:08:43.493 "name": "BaseBdev2", 00:08:43.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.493 "is_configured": false, 00:08:43.493 "data_offset": 0, 00:08:43.493 "data_size": 0 00:08:43.493 }, 00:08:43.493 { 00:08:43.493 "name": "BaseBdev3", 00:08:43.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.493 "is_configured": false, 00:08:43.493 "data_offset": 0, 00:08:43.493 "data_size": 0 00:08:43.493 } 00:08:43.493 ] 00:08:43.493 }' 00:08:43.493 03:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.493 03:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.752 [2024-11-20 03:15:33.365115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.752 [2024-11-20 03:15:33.365207] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.752 [2024-11-20 03:15:33.377083] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.752 [2024-11-20 03:15:33.377176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.752 [2024-11-20 03:15:33.377229] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.752 [2024-11-20 03:15:33.377254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.752 [2024-11-20 03:15:33.377286] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:43.752 [2024-11-20 03:15:33.377315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.752 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.013 [2024-11-20 03:15:33.425813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.013 BaseBdev1 
00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.013 [ 00:08:44.013 { 00:08:44.013 "name": "BaseBdev1", 00:08:44.013 "aliases": [ 00:08:44.013 "00ae3bb4-b305-4d3c-ad13-cf7375dc3f25" 00:08:44.013 ], 00:08:44.013 "product_name": "Malloc disk", 00:08:44.013 "block_size": 512, 00:08:44.013 "num_blocks": 65536, 00:08:44.013 "uuid": "00ae3bb4-b305-4d3c-ad13-cf7375dc3f25", 00:08:44.013 "assigned_rate_limits": { 00:08:44.013 
"rw_ios_per_sec": 0, 00:08:44.013 "rw_mbytes_per_sec": 0, 00:08:44.013 "r_mbytes_per_sec": 0, 00:08:44.013 "w_mbytes_per_sec": 0 00:08:44.013 }, 00:08:44.013 "claimed": true, 00:08:44.013 "claim_type": "exclusive_write", 00:08:44.013 "zoned": false, 00:08:44.013 "supported_io_types": { 00:08:44.013 "read": true, 00:08:44.013 "write": true, 00:08:44.013 "unmap": true, 00:08:44.013 "flush": true, 00:08:44.013 "reset": true, 00:08:44.013 "nvme_admin": false, 00:08:44.013 "nvme_io": false, 00:08:44.013 "nvme_io_md": false, 00:08:44.013 "write_zeroes": true, 00:08:44.013 "zcopy": true, 00:08:44.013 "get_zone_info": false, 00:08:44.013 "zone_management": false, 00:08:44.013 "zone_append": false, 00:08:44.013 "compare": false, 00:08:44.013 "compare_and_write": false, 00:08:44.013 "abort": true, 00:08:44.013 "seek_hole": false, 00:08:44.013 "seek_data": false, 00:08:44.013 "copy": true, 00:08:44.013 "nvme_iov_md": false 00:08:44.013 }, 00:08:44.013 "memory_domains": [ 00:08:44.013 { 00:08:44.013 "dma_device_id": "system", 00:08:44.013 "dma_device_type": 1 00:08:44.013 }, 00:08:44.013 { 00:08:44.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.013 "dma_device_type": 2 00:08:44.013 } 00:08:44.013 ], 00:08:44.013 "driver_specific": {} 00:08:44.013 } 00:08:44.013 ] 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.013 "name": "Existed_Raid", 00:08:44.013 "uuid": "f10aa69f-dba1-49b3-9cb6-83b7ad4a22ad", 00:08:44.013 "strip_size_kb": 64, 00:08:44.013 "state": "configuring", 00:08:44.013 "raid_level": "concat", 00:08:44.013 "superblock": true, 00:08:44.013 "num_base_bdevs": 3, 00:08:44.013 "num_base_bdevs_discovered": 1, 00:08:44.013 "num_base_bdevs_operational": 3, 00:08:44.013 "base_bdevs_list": [ 00:08:44.013 { 00:08:44.013 "name": "BaseBdev1", 00:08:44.013 "uuid": "00ae3bb4-b305-4d3c-ad13-cf7375dc3f25", 00:08:44.013 "is_configured": true, 00:08:44.013 "data_offset": 2048, 00:08:44.013 "data_size": 
63488 00:08:44.013 }, 00:08:44.013 { 00:08:44.013 "name": "BaseBdev2", 00:08:44.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.013 "is_configured": false, 00:08:44.013 "data_offset": 0, 00:08:44.013 "data_size": 0 00:08:44.013 }, 00:08:44.013 { 00:08:44.013 "name": "BaseBdev3", 00:08:44.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.013 "is_configured": false, 00:08:44.013 "data_offset": 0, 00:08:44.013 "data_size": 0 00:08:44.013 } 00:08:44.013 ] 00:08:44.013 }' 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.013 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.583 [2024-11-20 03:15:33.964979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.583 [2024-11-20 03:15:33.965091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.583 [2024-11-20 03:15:33.977010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.583 [2024-11-20 
03:15:33.978901] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.583 [2024-11-20 03:15:33.978990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.583 [2024-11-20 03:15:33.979043] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.583 [2024-11-20 03:15:33.979078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.583 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.584 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.584 03:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.584 03:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.584 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.584 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.584 "name": "Existed_Raid", 00:08:44.584 "uuid": "bf6941a5-f3d4-4ddc-8295-559524c59e04", 00:08:44.584 "strip_size_kb": 64, 00:08:44.584 "state": "configuring", 00:08:44.584 "raid_level": "concat", 00:08:44.584 "superblock": true, 00:08:44.584 "num_base_bdevs": 3, 00:08:44.584 "num_base_bdevs_discovered": 1, 00:08:44.584 "num_base_bdevs_operational": 3, 00:08:44.584 "base_bdevs_list": [ 00:08:44.584 { 00:08:44.584 "name": "BaseBdev1", 00:08:44.584 "uuid": "00ae3bb4-b305-4d3c-ad13-cf7375dc3f25", 00:08:44.584 "is_configured": true, 00:08:44.584 "data_offset": 2048, 00:08:44.584 "data_size": 63488 00:08:44.584 }, 00:08:44.584 { 00:08:44.584 "name": "BaseBdev2", 00:08:44.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.584 "is_configured": false, 00:08:44.584 "data_offset": 0, 00:08:44.584 "data_size": 0 00:08:44.584 }, 00:08:44.584 { 00:08:44.584 "name": "BaseBdev3", 00:08:44.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.584 "is_configured": false, 00:08:44.584 "data_offset": 0, 00:08:44.584 "data_size": 0 00:08:44.584 } 00:08:44.584 ] 00:08:44.584 }' 00:08:44.584 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.584 03:15:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:44.844 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:44.844 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.844 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.844 [2024-11-20 03:15:34.456207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.844 BaseBdev2 00:08:44.844 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.844 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:44.844 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:44.844 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.844 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:44.844 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.845 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.845 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.845 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.845 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.845 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.845 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:44.845 03:15:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.845 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.104 [ 00:08:45.104 { 00:08:45.104 "name": "BaseBdev2", 00:08:45.104 "aliases": [ 00:08:45.104 "f1997988-7853-499d-9350-44fb8e978204" 00:08:45.104 ], 00:08:45.104 "product_name": "Malloc disk", 00:08:45.104 "block_size": 512, 00:08:45.104 "num_blocks": 65536, 00:08:45.104 "uuid": "f1997988-7853-499d-9350-44fb8e978204", 00:08:45.104 "assigned_rate_limits": { 00:08:45.104 "rw_ios_per_sec": 0, 00:08:45.104 "rw_mbytes_per_sec": 0, 00:08:45.104 "r_mbytes_per_sec": 0, 00:08:45.104 "w_mbytes_per_sec": 0 00:08:45.104 }, 00:08:45.105 "claimed": true, 00:08:45.105 "claim_type": "exclusive_write", 00:08:45.105 "zoned": false, 00:08:45.105 "supported_io_types": { 00:08:45.105 "read": true, 00:08:45.105 "write": true, 00:08:45.105 "unmap": true, 00:08:45.105 "flush": true, 00:08:45.105 "reset": true, 00:08:45.105 "nvme_admin": false, 00:08:45.105 "nvme_io": false, 00:08:45.105 "nvme_io_md": false, 00:08:45.105 "write_zeroes": true, 00:08:45.105 "zcopy": true, 00:08:45.105 "get_zone_info": false, 00:08:45.105 "zone_management": false, 00:08:45.105 "zone_append": false, 00:08:45.105 "compare": false, 00:08:45.105 "compare_and_write": false, 00:08:45.105 "abort": true, 00:08:45.105 "seek_hole": false, 00:08:45.105 "seek_data": false, 00:08:45.105 "copy": true, 00:08:45.105 "nvme_iov_md": false 00:08:45.105 }, 00:08:45.105 "memory_domains": [ 00:08:45.105 { 00:08:45.105 "dma_device_id": "system", 00:08:45.105 "dma_device_type": 1 00:08:45.105 }, 00:08:45.105 { 00:08:45.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.105 "dma_device_type": 2 00:08:45.105 } 00:08:45.105 ], 00:08:45.105 "driver_specific": {} 00:08:45.105 } 00:08:45.105 ] 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.105 "name": "Existed_Raid", 00:08:45.105 "uuid": "bf6941a5-f3d4-4ddc-8295-559524c59e04", 00:08:45.105 "strip_size_kb": 64, 00:08:45.105 "state": "configuring", 00:08:45.105 "raid_level": "concat", 00:08:45.105 "superblock": true, 00:08:45.105 "num_base_bdevs": 3, 00:08:45.105 "num_base_bdevs_discovered": 2, 00:08:45.105 "num_base_bdevs_operational": 3, 00:08:45.105 "base_bdevs_list": [ 00:08:45.105 { 00:08:45.105 "name": "BaseBdev1", 00:08:45.105 "uuid": "00ae3bb4-b305-4d3c-ad13-cf7375dc3f25", 00:08:45.105 "is_configured": true, 00:08:45.105 "data_offset": 2048, 00:08:45.105 "data_size": 63488 00:08:45.105 }, 00:08:45.105 { 00:08:45.105 "name": "BaseBdev2", 00:08:45.105 "uuid": "f1997988-7853-499d-9350-44fb8e978204", 00:08:45.105 "is_configured": true, 00:08:45.105 "data_offset": 2048, 00:08:45.105 "data_size": 63488 00:08:45.105 }, 00:08:45.105 { 00:08:45.105 "name": "BaseBdev3", 00:08:45.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.105 "is_configured": false, 00:08:45.105 "data_offset": 0, 00:08:45.105 "data_size": 0 00:08:45.105 } 00:08:45.105 ] 00:08:45.105 }' 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.105 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.365 03:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:45.365 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.365 03:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.625 [2024-11-20 03:15:35.014339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.625 [2024-11-20 03:15:35.014724] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:45.625 [2024-11-20 03:15:35.014791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:45.625 [2024-11-20 03:15:35.015088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:45.625 BaseBdev3 00:08:45.625 [2024-11-20 03:15:35.015287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:45.625 [2024-11-20 03:15:35.015334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:45.626 [2024-11-20 03:15:35.015521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.626 [ 00:08:45.626 { 00:08:45.626 "name": "BaseBdev3", 00:08:45.626 "aliases": [ 00:08:45.626 "408572bf-e211-47cd-bc5a-98e41f476621" 00:08:45.626 ], 00:08:45.626 "product_name": "Malloc disk", 00:08:45.626 "block_size": 512, 00:08:45.626 "num_blocks": 65536, 00:08:45.626 "uuid": "408572bf-e211-47cd-bc5a-98e41f476621", 00:08:45.626 "assigned_rate_limits": { 00:08:45.626 "rw_ios_per_sec": 0, 00:08:45.626 "rw_mbytes_per_sec": 0, 00:08:45.626 "r_mbytes_per_sec": 0, 00:08:45.626 "w_mbytes_per_sec": 0 00:08:45.626 }, 00:08:45.626 "claimed": true, 00:08:45.626 "claim_type": "exclusive_write", 00:08:45.626 "zoned": false, 00:08:45.626 "supported_io_types": { 00:08:45.626 "read": true, 00:08:45.626 "write": true, 00:08:45.626 "unmap": true, 00:08:45.626 "flush": true, 00:08:45.626 "reset": true, 00:08:45.626 "nvme_admin": false, 00:08:45.626 "nvme_io": false, 00:08:45.626 "nvme_io_md": false, 00:08:45.626 "write_zeroes": true, 00:08:45.626 "zcopy": true, 00:08:45.626 "get_zone_info": false, 00:08:45.626 "zone_management": false, 00:08:45.626 "zone_append": false, 00:08:45.626 "compare": false, 00:08:45.626 "compare_and_write": false, 00:08:45.626 "abort": true, 00:08:45.626 "seek_hole": false, 00:08:45.626 "seek_data": false, 00:08:45.626 "copy": true, 00:08:45.626 "nvme_iov_md": false 00:08:45.626 }, 00:08:45.626 "memory_domains": [ 00:08:45.626 { 00:08:45.626 "dma_device_id": "system", 00:08:45.626 "dma_device_type": 1 00:08:45.626 }, 00:08:45.626 { 00:08:45.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.626 "dma_device_type": 2 00:08:45.626 } 00:08:45.626 ], 00:08:45.626 "driver_specific": 
{} 00:08:45.626 } 00:08:45.626 ] 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.626 "name": "Existed_Raid", 00:08:45.626 "uuid": "bf6941a5-f3d4-4ddc-8295-559524c59e04", 00:08:45.626 "strip_size_kb": 64, 00:08:45.626 "state": "online", 00:08:45.626 "raid_level": "concat", 00:08:45.626 "superblock": true, 00:08:45.626 "num_base_bdevs": 3, 00:08:45.626 "num_base_bdevs_discovered": 3, 00:08:45.626 "num_base_bdevs_operational": 3, 00:08:45.626 "base_bdevs_list": [ 00:08:45.626 { 00:08:45.626 "name": "BaseBdev1", 00:08:45.626 "uuid": "00ae3bb4-b305-4d3c-ad13-cf7375dc3f25", 00:08:45.626 "is_configured": true, 00:08:45.626 "data_offset": 2048, 00:08:45.626 "data_size": 63488 00:08:45.626 }, 00:08:45.626 { 00:08:45.626 "name": "BaseBdev2", 00:08:45.626 "uuid": "f1997988-7853-499d-9350-44fb8e978204", 00:08:45.626 "is_configured": true, 00:08:45.626 "data_offset": 2048, 00:08:45.626 "data_size": 63488 00:08:45.626 }, 00:08:45.626 { 00:08:45.626 "name": "BaseBdev3", 00:08:45.626 "uuid": "408572bf-e211-47cd-bc5a-98e41f476621", 00:08:45.626 "is_configured": true, 00:08:45.626 "data_offset": 2048, 00:08:45.626 "data_size": 63488 00:08:45.626 } 00:08:45.626 ] 00:08:45.626 }' 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.626 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.886 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.146 [2024-11-20 03:15:35.525844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.146 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.146 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.146 "name": "Existed_Raid", 00:08:46.146 "aliases": [ 00:08:46.146 "bf6941a5-f3d4-4ddc-8295-559524c59e04" 00:08:46.146 ], 00:08:46.146 "product_name": "Raid Volume", 00:08:46.146 "block_size": 512, 00:08:46.146 "num_blocks": 190464, 00:08:46.146 "uuid": "bf6941a5-f3d4-4ddc-8295-559524c59e04", 00:08:46.146 "assigned_rate_limits": { 00:08:46.146 "rw_ios_per_sec": 0, 00:08:46.146 "rw_mbytes_per_sec": 0, 00:08:46.146 "r_mbytes_per_sec": 0, 00:08:46.146 "w_mbytes_per_sec": 0 00:08:46.146 }, 00:08:46.146 "claimed": false, 00:08:46.146 "zoned": false, 00:08:46.146 "supported_io_types": { 00:08:46.146 "read": true, 00:08:46.146 "write": true, 00:08:46.146 "unmap": true, 00:08:46.146 "flush": true, 00:08:46.146 "reset": true, 00:08:46.146 "nvme_admin": false, 00:08:46.146 "nvme_io": false, 00:08:46.146 "nvme_io_md": false, 00:08:46.146 
"write_zeroes": true, 00:08:46.146 "zcopy": false, 00:08:46.146 "get_zone_info": false, 00:08:46.146 "zone_management": false, 00:08:46.146 "zone_append": false, 00:08:46.146 "compare": false, 00:08:46.146 "compare_and_write": false, 00:08:46.146 "abort": false, 00:08:46.146 "seek_hole": false, 00:08:46.146 "seek_data": false, 00:08:46.146 "copy": false, 00:08:46.146 "nvme_iov_md": false 00:08:46.146 }, 00:08:46.147 "memory_domains": [ 00:08:46.147 { 00:08:46.147 "dma_device_id": "system", 00:08:46.147 "dma_device_type": 1 00:08:46.147 }, 00:08:46.147 { 00:08:46.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.147 "dma_device_type": 2 00:08:46.147 }, 00:08:46.147 { 00:08:46.147 "dma_device_id": "system", 00:08:46.147 "dma_device_type": 1 00:08:46.147 }, 00:08:46.147 { 00:08:46.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.147 "dma_device_type": 2 00:08:46.147 }, 00:08:46.147 { 00:08:46.147 "dma_device_id": "system", 00:08:46.147 "dma_device_type": 1 00:08:46.147 }, 00:08:46.147 { 00:08:46.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.147 "dma_device_type": 2 00:08:46.147 } 00:08:46.147 ], 00:08:46.147 "driver_specific": { 00:08:46.147 "raid": { 00:08:46.147 "uuid": "bf6941a5-f3d4-4ddc-8295-559524c59e04", 00:08:46.147 "strip_size_kb": 64, 00:08:46.147 "state": "online", 00:08:46.147 "raid_level": "concat", 00:08:46.147 "superblock": true, 00:08:46.147 "num_base_bdevs": 3, 00:08:46.147 "num_base_bdevs_discovered": 3, 00:08:46.147 "num_base_bdevs_operational": 3, 00:08:46.147 "base_bdevs_list": [ 00:08:46.147 { 00:08:46.147 "name": "BaseBdev1", 00:08:46.147 "uuid": "00ae3bb4-b305-4d3c-ad13-cf7375dc3f25", 00:08:46.147 "is_configured": true, 00:08:46.147 "data_offset": 2048, 00:08:46.147 "data_size": 63488 00:08:46.147 }, 00:08:46.147 { 00:08:46.147 "name": "BaseBdev2", 00:08:46.147 "uuid": "f1997988-7853-499d-9350-44fb8e978204", 00:08:46.147 "is_configured": true, 00:08:46.147 "data_offset": 2048, 00:08:46.147 "data_size": 63488 00:08:46.147 }, 
00:08:46.147 { 00:08:46.147 "name": "BaseBdev3", 00:08:46.147 "uuid": "408572bf-e211-47cd-bc5a-98e41f476621", 00:08:46.147 "is_configured": true, 00:08:46.147 "data_offset": 2048, 00:08:46.147 "data_size": 63488 00:08:46.147 } 00:08:46.147 ] 00:08:46.147 } 00:08:46.147 } 00:08:46.147 }' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:46.147 BaseBdev2 00:08:46.147 BaseBdev3' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.147 
03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.147 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.147 [2024-11-20 03:15:35.777177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:46.147 [2024-11-20 03:15:35.777265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.147 [2024-11-20 03:15:35.777354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.407 "name": "Existed_Raid", 00:08:46.407 "uuid": "bf6941a5-f3d4-4ddc-8295-559524c59e04", 00:08:46.407 "strip_size_kb": 64, 00:08:46.407 "state": "offline", 00:08:46.407 "raid_level": "concat", 00:08:46.407 "superblock": true, 00:08:46.407 "num_base_bdevs": 3, 00:08:46.407 "num_base_bdevs_discovered": 2, 00:08:46.407 "num_base_bdevs_operational": 2, 00:08:46.407 "base_bdevs_list": [ 00:08:46.407 { 00:08:46.407 "name": null, 00:08:46.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.407 "is_configured": false, 00:08:46.407 "data_offset": 0, 00:08:46.407 "data_size": 63488 00:08:46.407 }, 00:08:46.407 { 00:08:46.407 "name": "BaseBdev2", 00:08:46.407 "uuid": "f1997988-7853-499d-9350-44fb8e978204", 00:08:46.407 "is_configured": true, 00:08:46.407 "data_offset": 2048, 00:08:46.407 "data_size": 63488 00:08:46.407 }, 00:08:46.407 { 00:08:46.407 "name": "BaseBdev3", 00:08:46.407 "uuid": "408572bf-e211-47cd-bc5a-98e41f476621", 
00:08:46.407 "is_configured": true, 00:08:46.407 "data_offset": 2048, 00:08:46.407 "data_size": 63488 00:08:46.407 } 00:08:46.407 ] 00:08:46.407 }' 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.407 03:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.977 [2024-11-20 03:15:36.361693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.977 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.977 [2024-11-20 03:15:36.514336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:46.977 [2024-11-20 03:15:36.514442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 BaseBdev2 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.238 03:15:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 [ 00:08:47.238 { 00:08:47.238 "name": "BaseBdev2", 00:08:47.238 "aliases": [ 00:08:47.238 "724cf8ab-5ae6-4401-b047-48957661bbde" 00:08:47.238 ], 00:08:47.238 "product_name": "Malloc disk", 00:08:47.238 "block_size": 512, 00:08:47.238 "num_blocks": 65536, 00:08:47.238 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:47.238 "assigned_rate_limits": { 00:08:47.238 "rw_ios_per_sec": 0, 00:08:47.238 "rw_mbytes_per_sec": 0, 00:08:47.238 "r_mbytes_per_sec": 0, 00:08:47.238 "w_mbytes_per_sec": 0 00:08:47.238 }, 00:08:47.238 "claimed": false, 00:08:47.238 "zoned": false, 00:08:47.238 "supported_io_types": { 00:08:47.238 "read": true, 00:08:47.238 "write": true, 00:08:47.238 "unmap": true, 00:08:47.238 "flush": true, 00:08:47.238 "reset": true, 00:08:47.238 "nvme_admin": false, 00:08:47.238 "nvme_io": false, 00:08:47.238 "nvme_io_md": false, 00:08:47.238 "write_zeroes": true, 00:08:47.238 "zcopy": true, 00:08:47.238 "get_zone_info": false, 00:08:47.238 
"zone_management": false, 00:08:47.238 "zone_append": false, 00:08:47.238 "compare": false, 00:08:47.238 "compare_and_write": false, 00:08:47.238 "abort": true, 00:08:47.238 "seek_hole": false, 00:08:47.238 "seek_data": false, 00:08:47.238 "copy": true, 00:08:47.238 "nvme_iov_md": false 00:08:47.238 }, 00:08:47.238 "memory_domains": [ 00:08:47.238 { 00:08:47.238 "dma_device_id": "system", 00:08:47.238 "dma_device_type": 1 00:08:47.238 }, 00:08:47.238 { 00:08:47.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.238 "dma_device_type": 2 00:08:47.238 } 00:08:47.238 ], 00:08:47.238 "driver_specific": {} 00:08:47.238 } 00:08:47.238 ] 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 BaseBdev3 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.239 [ 00:08:47.239 { 00:08:47.239 "name": "BaseBdev3", 00:08:47.239 "aliases": [ 00:08:47.239 "075dad88-bde5-4c9b-936f-0afed50fff2f" 00:08:47.239 ], 00:08:47.239 "product_name": "Malloc disk", 00:08:47.239 "block_size": 512, 00:08:47.239 "num_blocks": 65536, 00:08:47.239 "uuid": "075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:47.239 "assigned_rate_limits": { 00:08:47.239 "rw_ios_per_sec": 0, 00:08:47.239 "rw_mbytes_per_sec": 0, 00:08:47.239 "r_mbytes_per_sec": 0, 00:08:47.239 "w_mbytes_per_sec": 0 00:08:47.239 }, 00:08:47.239 "claimed": false, 00:08:47.239 "zoned": false, 00:08:47.239 "supported_io_types": { 00:08:47.239 "read": true, 00:08:47.239 "write": true, 00:08:47.239 "unmap": true, 00:08:47.239 "flush": true, 00:08:47.239 "reset": true, 00:08:47.239 "nvme_admin": false, 00:08:47.239 "nvme_io": false, 00:08:47.239 "nvme_io_md": false, 00:08:47.239 "write_zeroes": true, 00:08:47.239 
"zcopy": true, 00:08:47.239 "get_zone_info": false, 00:08:47.239 "zone_management": false, 00:08:47.239 "zone_append": false, 00:08:47.239 "compare": false, 00:08:47.239 "compare_and_write": false, 00:08:47.239 "abort": true, 00:08:47.239 "seek_hole": false, 00:08:47.239 "seek_data": false, 00:08:47.239 "copy": true, 00:08:47.239 "nvme_iov_md": false 00:08:47.239 }, 00:08:47.239 "memory_domains": [ 00:08:47.239 { 00:08:47.239 "dma_device_id": "system", 00:08:47.239 "dma_device_type": 1 00:08:47.239 }, 00:08:47.239 { 00:08:47.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.239 "dma_device_type": 2 00:08:47.239 } 00:08:47.239 ], 00:08:47.239 "driver_specific": {} 00:08:47.239 } 00:08:47.239 ] 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.239 [2024-11-20 03:15:36.829520] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.239 [2024-11-20 03:15:36.829670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.239 [2024-11-20 03:15:36.829743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.239 [2024-11-20 03:15:36.831664] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.239 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.499 03:15:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.499 "name": "Existed_Raid", 00:08:47.499 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:47.499 "strip_size_kb": 64, 00:08:47.499 "state": "configuring", 00:08:47.499 "raid_level": "concat", 00:08:47.499 "superblock": true, 00:08:47.499 "num_base_bdevs": 3, 00:08:47.499 "num_base_bdevs_discovered": 2, 00:08:47.499 "num_base_bdevs_operational": 3, 00:08:47.499 "base_bdevs_list": [ 00:08:47.499 { 00:08:47.499 "name": "BaseBdev1", 00:08:47.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.499 "is_configured": false, 00:08:47.499 "data_offset": 0, 00:08:47.499 "data_size": 0 00:08:47.499 }, 00:08:47.499 { 00:08:47.499 "name": "BaseBdev2", 00:08:47.499 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:47.499 "is_configured": true, 00:08:47.499 "data_offset": 2048, 00:08:47.499 "data_size": 63488 00:08:47.499 }, 00:08:47.499 { 00:08:47.499 "name": "BaseBdev3", 00:08:47.499 "uuid": "075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:47.499 "is_configured": true, 00:08:47.499 "data_offset": 2048, 00:08:47.499 "data_size": 63488 00:08:47.499 } 00:08:47.499 ] 00:08:47.499 }' 00:08:47.499 03:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.499 03:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.759 [2024-11-20 03:15:37.304677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.759 03:15:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.759 "name": "Existed_Raid", 00:08:47.759 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:47.759 "strip_size_kb": 64, 
00:08:47.759 "state": "configuring", 00:08:47.759 "raid_level": "concat", 00:08:47.759 "superblock": true, 00:08:47.759 "num_base_bdevs": 3, 00:08:47.759 "num_base_bdevs_discovered": 1, 00:08:47.759 "num_base_bdevs_operational": 3, 00:08:47.759 "base_bdevs_list": [ 00:08:47.759 { 00:08:47.759 "name": "BaseBdev1", 00:08:47.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.759 "is_configured": false, 00:08:47.759 "data_offset": 0, 00:08:47.759 "data_size": 0 00:08:47.759 }, 00:08:47.759 { 00:08:47.759 "name": null, 00:08:47.759 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:47.759 "is_configured": false, 00:08:47.759 "data_offset": 0, 00:08:47.759 "data_size": 63488 00:08:47.759 }, 00:08:47.759 { 00:08:47.759 "name": "BaseBdev3", 00:08:47.759 "uuid": "075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:47.759 "is_configured": true, 00:08:47.759 "data_offset": 2048, 00:08:47.759 "data_size": 63488 00:08:47.759 } 00:08:47.759 ] 00:08:47.759 }' 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.759 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.330 [2024-11-20 03:15:37.832446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.330 BaseBdev1 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.330 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.330 
[ 00:08:48.330 { 00:08:48.330 "name": "BaseBdev1", 00:08:48.330 "aliases": [ 00:08:48.330 "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0" 00:08:48.330 ], 00:08:48.330 "product_name": "Malloc disk", 00:08:48.330 "block_size": 512, 00:08:48.330 "num_blocks": 65536, 00:08:48.330 "uuid": "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0", 00:08:48.330 "assigned_rate_limits": { 00:08:48.330 "rw_ios_per_sec": 0, 00:08:48.330 "rw_mbytes_per_sec": 0, 00:08:48.330 "r_mbytes_per_sec": 0, 00:08:48.330 "w_mbytes_per_sec": 0 00:08:48.330 }, 00:08:48.330 "claimed": true, 00:08:48.330 "claim_type": "exclusive_write", 00:08:48.330 "zoned": false, 00:08:48.330 "supported_io_types": { 00:08:48.330 "read": true, 00:08:48.330 "write": true, 00:08:48.330 "unmap": true, 00:08:48.330 "flush": true, 00:08:48.330 "reset": true, 00:08:48.330 "nvme_admin": false, 00:08:48.330 "nvme_io": false, 00:08:48.330 "nvme_io_md": false, 00:08:48.330 "write_zeroes": true, 00:08:48.330 "zcopy": true, 00:08:48.330 "get_zone_info": false, 00:08:48.330 "zone_management": false, 00:08:48.330 "zone_append": false, 00:08:48.330 "compare": false, 00:08:48.330 "compare_and_write": false, 00:08:48.330 "abort": true, 00:08:48.330 "seek_hole": false, 00:08:48.330 "seek_data": false, 00:08:48.330 "copy": true, 00:08:48.330 "nvme_iov_md": false 00:08:48.330 }, 00:08:48.330 "memory_domains": [ 00:08:48.330 { 00:08:48.330 "dma_device_id": "system", 00:08:48.330 "dma_device_type": 1 00:08:48.330 }, 00:08:48.330 { 00:08:48.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.331 "dma_device_type": 2 00:08:48.331 } 00:08:48.331 ], 00:08:48.331 "driver_specific": {} 00:08:48.331 } 00:08:48.331 ] 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.331 "name": "Existed_Raid", 00:08:48.331 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:48.331 "strip_size_kb": 64, 00:08:48.331 "state": "configuring", 00:08:48.331 "raid_level": "concat", 00:08:48.331 "superblock": true, 
00:08:48.331 "num_base_bdevs": 3, 00:08:48.331 "num_base_bdevs_discovered": 2, 00:08:48.331 "num_base_bdevs_operational": 3, 00:08:48.331 "base_bdevs_list": [ 00:08:48.331 { 00:08:48.331 "name": "BaseBdev1", 00:08:48.331 "uuid": "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0", 00:08:48.331 "is_configured": true, 00:08:48.331 "data_offset": 2048, 00:08:48.331 "data_size": 63488 00:08:48.331 }, 00:08:48.331 { 00:08:48.331 "name": null, 00:08:48.331 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:48.331 "is_configured": false, 00:08:48.331 "data_offset": 0, 00:08:48.331 "data_size": 63488 00:08:48.331 }, 00:08:48.331 { 00:08:48.331 "name": "BaseBdev3", 00:08:48.331 "uuid": "075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:48.331 "is_configured": true, 00:08:48.331 "data_offset": 2048, 00:08:48.331 "data_size": 63488 00:08:48.331 } 00:08:48.331 ] 00:08:48.331 }' 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.331 03:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.902 [2024-11-20 03:15:38.387586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.902 "name": "Existed_Raid", 00:08:48.902 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:48.902 "strip_size_kb": 64, 00:08:48.902 "state": "configuring", 00:08:48.902 "raid_level": "concat", 00:08:48.902 "superblock": true, 00:08:48.902 "num_base_bdevs": 3, 00:08:48.902 "num_base_bdevs_discovered": 1, 00:08:48.902 "num_base_bdevs_operational": 3, 00:08:48.902 "base_bdevs_list": [ 00:08:48.902 { 00:08:48.902 "name": "BaseBdev1", 00:08:48.902 "uuid": "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0", 00:08:48.902 "is_configured": true, 00:08:48.902 "data_offset": 2048, 00:08:48.902 "data_size": 63488 00:08:48.902 }, 00:08:48.902 { 00:08:48.902 "name": null, 00:08:48.902 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:48.902 "is_configured": false, 00:08:48.902 "data_offset": 0, 00:08:48.902 "data_size": 63488 00:08:48.902 }, 00:08:48.902 { 00:08:48.902 "name": null, 00:08:48.902 "uuid": "075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:48.902 "is_configured": false, 00:08:48.902 "data_offset": 0, 00:08:48.902 "data_size": 63488 00:08:48.902 } 00:08:48.902 ] 00:08:48.902 }' 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.902 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.472 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:08:49.472 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.472 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:49.472 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:49.472 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.473 [2024-11-20 03:15:38.898795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.473 03:15:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.473 "name": "Existed_Raid", 00:08:49.473 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:49.473 "strip_size_kb": 64, 00:08:49.473 "state": "configuring", 00:08:49.473 "raid_level": "concat", 00:08:49.473 "superblock": true, 00:08:49.473 "num_base_bdevs": 3, 00:08:49.473 "num_base_bdevs_discovered": 2, 00:08:49.473 "num_base_bdevs_operational": 3, 00:08:49.473 "base_bdevs_list": [ 00:08:49.473 { 00:08:49.473 "name": "BaseBdev1", 00:08:49.473 "uuid": "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0", 00:08:49.473 "is_configured": true, 00:08:49.473 "data_offset": 2048, 00:08:49.473 "data_size": 63488 00:08:49.473 }, 00:08:49.473 { 00:08:49.473 "name": null, 00:08:49.473 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:49.473 "is_configured": false, 00:08:49.473 "data_offset": 0, 00:08:49.473 "data_size": 63488 00:08:49.473 }, 00:08:49.473 { 00:08:49.473 "name": "BaseBdev3", 00:08:49.473 "uuid": "075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:49.473 "is_configured": true, 00:08:49.473 "data_offset": 2048, 00:08:49.473 "data_size": 63488 00:08:49.473 } 00:08:49.473 ] 00:08:49.473 }' 00:08:49.473 03:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.473 
03:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.733 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.733 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:49.733 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.733 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.733 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.994 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:49.994 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.994 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.994 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.994 [2024-11-20 03:15:39.390105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.994 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.995 03:15:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.995 "name": "Existed_Raid", 00:08:49.995 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:49.995 "strip_size_kb": 64, 00:08:49.995 "state": "configuring", 00:08:49.995 "raid_level": "concat", 00:08:49.995 "superblock": true, 00:08:49.995 "num_base_bdevs": 3, 00:08:49.995 "num_base_bdevs_discovered": 1, 00:08:49.995 "num_base_bdevs_operational": 3, 00:08:49.995 "base_bdevs_list": [ 00:08:49.995 { 00:08:49.995 "name": null, 00:08:49.995 "uuid": "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0", 00:08:49.995 "is_configured": false, 00:08:49.995 "data_offset": 0, 00:08:49.995 "data_size": 63488 00:08:49.995 }, 00:08:49.995 { 00:08:49.995 "name": null, 00:08:49.995 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:49.995 "is_configured": false, 
00:08:49.995 "data_offset": 0, 00:08:49.995 "data_size": 63488 00:08:49.995 }, 00:08:49.995 { 00:08:49.995 "name": "BaseBdev3", 00:08:49.995 "uuid": "075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:49.995 "is_configured": true, 00:08:49.995 "data_offset": 2048, 00:08:49.995 "data_size": 63488 00:08:49.995 } 00:08:49.995 ] 00:08:49.995 }' 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.995 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.564 [2024-11-20 03:15:39.935502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.564 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.564 "name": "Existed_Raid", 00:08:50.564 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:50.564 "strip_size_kb": 64, 00:08:50.564 "state": "configuring", 00:08:50.564 "raid_level": "concat", 00:08:50.564 "superblock": true, 00:08:50.564 
"num_base_bdevs": 3, 00:08:50.564 "num_base_bdevs_discovered": 2, 00:08:50.564 "num_base_bdevs_operational": 3, 00:08:50.564 "base_bdevs_list": [ 00:08:50.564 { 00:08:50.564 "name": null, 00:08:50.564 "uuid": "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0", 00:08:50.564 "is_configured": false, 00:08:50.564 "data_offset": 0, 00:08:50.564 "data_size": 63488 00:08:50.564 }, 00:08:50.564 { 00:08:50.565 "name": "BaseBdev2", 00:08:50.565 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:50.565 "is_configured": true, 00:08:50.565 "data_offset": 2048, 00:08:50.565 "data_size": 63488 00:08:50.565 }, 00:08:50.565 { 00:08:50.565 "name": "BaseBdev3", 00:08:50.565 "uuid": "075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:50.565 "is_configured": true, 00:08:50.565 "data_offset": 2048, 00:08:50.565 "data_size": 63488 00:08:50.565 } 00:08:50.565 ] 00:08:50.565 }' 00:08:50.565 03:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.565 03:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:50.825 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d9f39bc5-f8ac-4ee6-940e-43c0d98451d0 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.085 [2024-11-20 03:15:40.524241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:51.085 [2024-11-20 03:15:40.524555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:51.085 NewBaseBdev 00:08:51.085 [2024-11-20 03:15:40.524644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:51.085 [2024-11-20 03:15:40.524938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:51.085 [2024-11-20 03:15:40.525094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:51.085 [2024-11-20 03:15:40.525105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:51.085 [2024-11-20 03:15:40.525286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.085 [ 00:08:51.085 { 00:08:51.085 "name": "NewBaseBdev", 00:08:51.085 "aliases": [ 00:08:51.085 "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0" 00:08:51.085 ], 00:08:51.085 "product_name": "Malloc disk", 00:08:51.085 "block_size": 512, 00:08:51.085 "num_blocks": 65536, 00:08:51.085 "uuid": "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0", 00:08:51.085 "assigned_rate_limits": { 00:08:51.085 "rw_ios_per_sec": 0, 00:08:51.085 "rw_mbytes_per_sec": 0, 00:08:51.085 "r_mbytes_per_sec": 0, 00:08:51.085 "w_mbytes_per_sec": 0 00:08:51.085 }, 00:08:51.085 "claimed": true, 00:08:51.085 "claim_type": "exclusive_write", 00:08:51.085 "zoned": false, 00:08:51.085 "supported_io_types": { 00:08:51.085 "read": true, 00:08:51.085 
"write": true, 00:08:51.085 "unmap": true, 00:08:51.085 "flush": true, 00:08:51.085 "reset": true, 00:08:51.085 "nvme_admin": false, 00:08:51.085 "nvme_io": false, 00:08:51.085 "nvme_io_md": false, 00:08:51.085 "write_zeroes": true, 00:08:51.085 "zcopy": true, 00:08:51.085 "get_zone_info": false, 00:08:51.085 "zone_management": false, 00:08:51.085 "zone_append": false, 00:08:51.085 "compare": false, 00:08:51.085 "compare_and_write": false, 00:08:51.085 "abort": true, 00:08:51.085 "seek_hole": false, 00:08:51.085 "seek_data": false, 00:08:51.085 "copy": true, 00:08:51.085 "nvme_iov_md": false 00:08:51.085 }, 00:08:51.085 "memory_domains": [ 00:08:51.085 { 00:08:51.085 "dma_device_id": "system", 00:08:51.085 "dma_device_type": 1 00:08:51.085 }, 00:08:51.085 { 00:08:51.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.085 "dma_device_type": 2 00:08:51.085 } 00:08:51.085 ], 00:08:51.085 "driver_specific": {} 00:08:51.085 } 00:08:51.085 ] 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.085 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.085 "name": "Existed_Raid", 00:08:51.085 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:51.085 "strip_size_kb": 64, 00:08:51.085 "state": "online", 00:08:51.085 "raid_level": "concat", 00:08:51.085 "superblock": true, 00:08:51.085 "num_base_bdevs": 3, 00:08:51.085 "num_base_bdevs_discovered": 3, 00:08:51.085 "num_base_bdevs_operational": 3, 00:08:51.085 "base_bdevs_list": [ 00:08:51.086 { 00:08:51.086 "name": "NewBaseBdev", 00:08:51.086 "uuid": "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0", 00:08:51.086 "is_configured": true, 00:08:51.086 "data_offset": 2048, 00:08:51.086 "data_size": 63488 00:08:51.086 }, 00:08:51.086 { 00:08:51.086 "name": "BaseBdev2", 00:08:51.086 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:51.086 "is_configured": true, 00:08:51.086 "data_offset": 2048, 00:08:51.086 "data_size": 63488 00:08:51.086 }, 00:08:51.086 { 00:08:51.086 "name": "BaseBdev3", 00:08:51.086 "uuid": 
"075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:51.086 "is_configured": true, 00:08:51.086 "data_offset": 2048, 00:08:51.086 "data_size": 63488 00:08:51.086 } 00:08:51.086 ] 00:08:51.086 }' 00:08:51.086 03:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.086 03:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.657 [2024-11-20 03:15:41.079664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.657 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.657 "name": "Existed_Raid", 00:08:51.657 "aliases": [ 00:08:51.657 "ba61d146-78b4-4eab-b52b-3997c0bd0fbd" 
00:08:51.657 ], 00:08:51.657 "product_name": "Raid Volume", 00:08:51.657 "block_size": 512, 00:08:51.657 "num_blocks": 190464, 00:08:51.658 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:51.658 "assigned_rate_limits": { 00:08:51.658 "rw_ios_per_sec": 0, 00:08:51.658 "rw_mbytes_per_sec": 0, 00:08:51.658 "r_mbytes_per_sec": 0, 00:08:51.658 "w_mbytes_per_sec": 0 00:08:51.658 }, 00:08:51.658 "claimed": false, 00:08:51.658 "zoned": false, 00:08:51.658 "supported_io_types": { 00:08:51.658 "read": true, 00:08:51.658 "write": true, 00:08:51.658 "unmap": true, 00:08:51.658 "flush": true, 00:08:51.658 "reset": true, 00:08:51.658 "nvme_admin": false, 00:08:51.658 "nvme_io": false, 00:08:51.658 "nvme_io_md": false, 00:08:51.658 "write_zeroes": true, 00:08:51.658 "zcopy": false, 00:08:51.658 "get_zone_info": false, 00:08:51.658 "zone_management": false, 00:08:51.658 "zone_append": false, 00:08:51.658 "compare": false, 00:08:51.658 "compare_and_write": false, 00:08:51.658 "abort": false, 00:08:51.658 "seek_hole": false, 00:08:51.658 "seek_data": false, 00:08:51.658 "copy": false, 00:08:51.658 "nvme_iov_md": false 00:08:51.658 }, 00:08:51.658 "memory_domains": [ 00:08:51.658 { 00:08:51.658 "dma_device_id": "system", 00:08:51.658 "dma_device_type": 1 00:08:51.658 }, 00:08:51.658 { 00:08:51.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.658 "dma_device_type": 2 00:08:51.658 }, 00:08:51.658 { 00:08:51.658 "dma_device_id": "system", 00:08:51.658 "dma_device_type": 1 00:08:51.658 }, 00:08:51.658 { 00:08:51.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.658 "dma_device_type": 2 00:08:51.658 }, 00:08:51.658 { 00:08:51.658 "dma_device_id": "system", 00:08:51.658 "dma_device_type": 1 00:08:51.658 }, 00:08:51.658 { 00:08:51.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.658 "dma_device_type": 2 00:08:51.658 } 00:08:51.658 ], 00:08:51.658 "driver_specific": { 00:08:51.658 "raid": { 00:08:51.658 "uuid": "ba61d146-78b4-4eab-b52b-3997c0bd0fbd", 00:08:51.658 
"strip_size_kb": 64, 00:08:51.658 "state": "online", 00:08:51.658 "raid_level": "concat", 00:08:51.658 "superblock": true, 00:08:51.658 "num_base_bdevs": 3, 00:08:51.658 "num_base_bdevs_discovered": 3, 00:08:51.658 "num_base_bdevs_operational": 3, 00:08:51.658 "base_bdevs_list": [ 00:08:51.658 { 00:08:51.658 "name": "NewBaseBdev", 00:08:51.658 "uuid": "d9f39bc5-f8ac-4ee6-940e-43c0d98451d0", 00:08:51.658 "is_configured": true, 00:08:51.658 "data_offset": 2048, 00:08:51.658 "data_size": 63488 00:08:51.658 }, 00:08:51.658 { 00:08:51.658 "name": "BaseBdev2", 00:08:51.658 "uuid": "724cf8ab-5ae6-4401-b047-48957661bbde", 00:08:51.658 "is_configured": true, 00:08:51.658 "data_offset": 2048, 00:08:51.658 "data_size": 63488 00:08:51.658 }, 00:08:51.658 { 00:08:51.658 "name": "BaseBdev3", 00:08:51.658 "uuid": "075dad88-bde5-4c9b-936f-0afed50fff2f", 00:08:51.658 "is_configured": true, 00:08:51.658 "data_offset": 2048, 00:08:51.658 "data_size": 63488 00:08:51.658 } 00:08:51.658 ] 00:08:51.658 } 00:08:51.658 } 00:08:51.658 }' 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:51.658 BaseBdev2 00:08:51.658 BaseBdev3' 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.658 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.927 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.927 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.927 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.927 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:51.927 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:51.927 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.927 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.928 [2024-11-20 03:15:41.382850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:51.928 [2024-11-20 03:15:41.382953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.928 [2024-11-20 03:15:41.383066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.928 [2024-11-20 03:15:41.383142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.928 [2024-11-20 03:15:41.383209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66083 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66083 ']' 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 66083 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66083 00:08:51.928 killing process with pid 66083 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66083' 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66083 00:08:51.928 [2024-11-20 03:15:41.430517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.928 03:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66083 00:08:52.187 [2024-11-20 03:15:41.734363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.567 03:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:53.567 00:08:53.567 real 0m10.906s 00:08:53.567 user 0m17.391s 00:08:53.567 sys 0m1.921s 00:08:53.567 03:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.567 03:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.567 ************************************ 00:08:53.567 END TEST raid_state_function_test_sb 00:08:53.567 ************************************ 00:08:53.567 03:15:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:53.567 03:15:42 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:53.567 03:15:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.567 03:15:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.567 ************************************ 00:08:53.567 START TEST raid_superblock_test 00:08:53.567 ************************************ 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:53.567 03:15:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66709 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66709 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66709 ']' 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.567 03:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.568 03:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.568 03:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.568 03:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.568 [2024-11-20 03:15:43.009592] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:08:53.568 [2024-11-20 03:15:43.009805] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66709 ] 00:08:53.568 [2024-11-20 03:15:43.186800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.827 [2024-11-20 03:15:43.301514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.087 [2024-11-20 03:15:43.502624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.087 [2024-11-20 03:15:43.502694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:54.347 
03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.347 malloc1 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.347 [2024-11-20 03:15:43.904984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.347 [2024-11-20 03:15:43.905118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.347 [2024-11-20 03:15:43.905164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:54.347 [2024-11-20 03:15:43.905194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.347 [2024-11-20 03:15:43.907435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.347 [2024-11-20 03:15:43.907511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.347 pt1 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.347 malloc2 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.347 [2024-11-20 03:15:43.964212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:54.347 [2024-11-20 03:15:43.964346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.347 [2024-11-20 03:15:43.964386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:54.347 [2024-11-20 03:15:43.964417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.347 [2024-11-20 03:15:43.966715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.347 [2024-11-20 03:15:43.966791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:54.347 
pt2 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.347 03:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.607 malloc3 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.607 [2024-11-20 03:15:44.036503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:54.607 [2024-11-20 03:15:44.036635] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.607 [2024-11-20 03:15:44.036678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:54.607 [2024-11-20 03:15:44.036719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.607 [2024-11-20 03:15:44.038882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.607 [2024-11-20 03:15:44.038957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:54.607 pt3 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.607 [2024-11-20 03:15:44.048548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:54.607 [2024-11-20 03:15:44.050407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.607 [2024-11-20 03:15:44.050517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:54.607 [2024-11-20 03:15:44.050723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:54.607 [2024-11-20 03:15:44.050771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.607 [2024-11-20 03:15:44.051065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:54.607 [2024-11-20 03:15:44.051272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:54.607 [2024-11-20 03:15:44.051316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:54.607 [2024-11-20 03:15:44.051515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.607 03:15:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.607 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.607 "name": "raid_bdev1", 00:08:54.607 "uuid": "e06bd4e9-02e2-411c-9be1-3e5e11dec782", 00:08:54.607 "strip_size_kb": 64, 00:08:54.607 "state": "online", 00:08:54.607 "raid_level": "concat", 00:08:54.607 "superblock": true, 00:08:54.607 "num_base_bdevs": 3, 00:08:54.607 "num_base_bdevs_discovered": 3, 00:08:54.607 "num_base_bdevs_operational": 3, 00:08:54.607 "base_bdevs_list": [ 00:08:54.607 { 00:08:54.607 "name": "pt1", 00:08:54.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.607 "is_configured": true, 00:08:54.607 "data_offset": 2048, 00:08:54.607 "data_size": 63488 00:08:54.607 }, 00:08:54.607 { 00:08:54.607 "name": "pt2", 00:08:54.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.608 "is_configured": true, 00:08:54.608 "data_offset": 2048, 00:08:54.608 "data_size": 63488 00:08:54.608 }, 00:08:54.608 { 00:08:54.608 "name": "pt3", 00:08:54.608 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:54.608 "is_configured": true, 00:08:54.608 "data_offset": 2048, 00:08:54.608 "data_size": 63488 00:08:54.608 } 00:08:54.608 ] 00:08:54.608 }' 00:08:54.608 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.608 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.178 [2024-11-20 03:15:44.520089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.178 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.178 "name": "raid_bdev1", 00:08:55.178 "aliases": [ 00:08:55.178 "e06bd4e9-02e2-411c-9be1-3e5e11dec782" 00:08:55.178 ], 00:08:55.178 "product_name": "Raid Volume", 00:08:55.178 "block_size": 512, 00:08:55.178 "num_blocks": 190464, 00:08:55.178 "uuid": "e06bd4e9-02e2-411c-9be1-3e5e11dec782", 00:08:55.179 "assigned_rate_limits": { 00:08:55.179 "rw_ios_per_sec": 0, 00:08:55.179 "rw_mbytes_per_sec": 0, 00:08:55.179 "r_mbytes_per_sec": 0, 00:08:55.179 "w_mbytes_per_sec": 0 00:08:55.179 }, 00:08:55.179 "claimed": false, 00:08:55.179 "zoned": false, 00:08:55.179 "supported_io_types": { 00:08:55.179 "read": true, 00:08:55.179 "write": true, 00:08:55.179 "unmap": true, 00:08:55.179 "flush": true, 00:08:55.179 "reset": true, 00:08:55.179 "nvme_admin": false, 00:08:55.179 "nvme_io": false, 00:08:55.179 "nvme_io_md": false, 00:08:55.179 "write_zeroes": true, 00:08:55.179 "zcopy": false, 00:08:55.179 "get_zone_info": false, 00:08:55.179 "zone_management": false, 00:08:55.179 "zone_append": false, 00:08:55.179 "compare": 
false, 00:08:55.179 "compare_and_write": false, 00:08:55.179 "abort": false, 00:08:55.179 "seek_hole": false, 00:08:55.179 "seek_data": false, 00:08:55.179 "copy": false, 00:08:55.179 "nvme_iov_md": false 00:08:55.179 }, 00:08:55.179 "memory_domains": [ 00:08:55.179 { 00:08:55.179 "dma_device_id": "system", 00:08:55.179 "dma_device_type": 1 00:08:55.179 }, 00:08:55.179 { 00:08:55.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.179 "dma_device_type": 2 00:08:55.179 }, 00:08:55.179 { 00:08:55.179 "dma_device_id": "system", 00:08:55.179 "dma_device_type": 1 00:08:55.179 }, 00:08:55.179 { 00:08:55.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.179 "dma_device_type": 2 00:08:55.179 }, 00:08:55.179 { 00:08:55.179 "dma_device_id": "system", 00:08:55.179 "dma_device_type": 1 00:08:55.179 }, 00:08:55.179 { 00:08:55.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.179 "dma_device_type": 2 00:08:55.179 } 00:08:55.179 ], 00:08:55.179 "driver_specific": { 00:08:55.179 "raid": { 00:08:55.179 "uuid": "e06bd4e9-02e2-411c-9be1-3e5e11dec782", 00:08:55.179 "strip_size_kb": 64, 00:08:55.179 "state": "online", 00:08:55.179 "raid_level": "concat", 00:08:55.179 "superblock": true, 00:08:55.179 "num_base_bdevs": 3, 00:08:55.179 "num_base_bdevs_discovered": 3, 00:08:55.179 "num_base_bdevs_operational": 3, 00:08:55.179 "base_bdevs_list": [ 00:08:55.179 { 00:08:55.179 "name": "pt1", 00:08:55.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.179 "is_configured": true, 00:08:55.179 "data_offset": 2048, 00:08:55.179 "data_size": 63488 00:08:55.179 }, 00:08:55.179 { 00:08:55.179 "name": "pt2", 00:08:55.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.179 "is_configured": true, 00:08:55.179 "data_offset": 2048, 00:08:55.179 "data_size": 63488 00:08:55.179 }, 00:08:55.179 { 00:08:55.179 "name": "pt3", 00:08:55.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.179 "is_configured": true, 00:08:55.179 "data_offset": 2048, 00:08:55.179 
"data_size": 63488 00:08:55.179 } 00:08:55.179 ] 00:08:55.179 } 00:08:55.179 } 00:08:55.179 }' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:55.179 pt2 00:08:55.179 pt3' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.179 03:15:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.179 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.179 [2024-11-20 03:15:44.795586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.440 03:15:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e06bd4e9-02e2-411c-9be1-3e5e11dec782 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e06bd4e9-02e2-411c-9be1-3e5e11dec782 ']' 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.440 [2024-11-20 03:15:44.843190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.440 [2024-11-20 03:15:44.843225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.440 [2024-11-20 03:15:44.843319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.440 [2024-11-20 03:15:44.843387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.440 [2024-11-20 03:15:44.843399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.440 03:15:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.440 03:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.440 [2024-11-20 03:15:44.994934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:55.440 [2024-11-20 03:15:44.996948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:08:55.440 [2024-11-20 03:15:44.997063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:55.440 [2024-11-20 03:15:44.997144] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:55.440 [2024-11-20 03:15:44.997272] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:55.440 [2024-11-20 03:15:44.997339] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:55.440 [2024-11-20 03:15:44.997401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.440 [2024-11-20 03:15:44.997437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:55.440 request: 00:08:55.440 { 00:08:55.440 "name": "raid_bdev1", 00:08:55.440 "raid_level": "concat", 00:08:55.440 "base_bdevs": [ 00:08:55.440 "malloc1", 00:08:55.440 "malloc2", 00:08:55.440 "malloc3" 00:08:55.440 ], 00:08:55.440 "strip_size_kb": 64, 00:08:55.440 "superblock": false, 00:08:55.440 "method": "bdev_raid_create", 00:08:55.440 "req_id": 1 00:08:55.440 } 00:08:55.440 Got JSON-RPC error response 00:08:55.440 response: 00:08:55.440 { 00:08:55.440 "code": -17, 00:08:55.440 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:55.440 } 00:08:55.440 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:55.440 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:55.440 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:55.440 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.441 [2024-11-20 03:15:45.058783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.441 [2024-11-20 03:15:45.058890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.441 [2024-11-20 03:15:45.058930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:55.441 [2024-11-20 03:15:45.058980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.441 [2024-11-20 03:15:45.061256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.441 [2024-11-20 03:15:45.061332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.441 [2024-11-20 03:15:45.061440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:55.441 [2024-11-20 03:15:45.061517] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:55.441 pt1 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.441 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.701 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.701 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.701 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.701 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.701 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.701 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.701 "name": "raid_bdev1", 
00:08:55.701 "uuid": "e06bd4e9-02e2-411c-9be1-3e5e11dec782", 00:08:55.701 "strip_size_kb": 64, 00:08:55.701 "state": "configuring", 00:08:55.701 "raid_level": "concat", 00:08:55.701 "superblock": true, 00:08:55.701 "num_base_bdevs": 3, 00:08:55.701 "num_base_bdevs_discovered": 1, 00:08:55.701 "num_base_bdevs_operational": 3, 00:08:55.701 "base_bdevs_list": [ 00:08:55.701 { 00:08:55.701 "name": "pt1", 00:08:55.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.701 "is_configured": true, 00:08:55.701 "data_offset": 2048, 00:08:55.701 "data_size": 63488 00:08:55.701 }, 00:08:55.701 { 00:08:55.701 "name": null, 00:08:55.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.701 "is_configured": false, 00:08:55.701 "data_offset": 2048, 00:08:55.701 "data_size": 63488 00:08:55.701 }, 00:08:55.701 { 00:08:55.701 "name": null, 00:08:55.701 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.701 "is_configured": false, 00:08:55.701 "data_offset": 2048, 00:08:55.701 "data_size": 63488 00:08:55.701 } 00:08:55.701 ] 00:08:55.701 }' 00:08:55.701 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.701 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.961 [2024-11-20 03:15:45.530014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.961 [2024-11-20 03:15:45.530133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.961 [2024-11-20 03:15:45.530172] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:55.961 [2024-11-20 03:15:45.530200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.961 [2024-11-20 03:15:45.530737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.961 [2024-11-20 03:15:45.530801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.961 [2024-11-20 03:15:45.530934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:55.961 [2024-11-20 03:15:45.530986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.961 pt2 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.961 [2024-11-20 03:15:45.541993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.961 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.222 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.222 "name": "raid_bdev1", 00:08:56.222 "uuid": "e06bd4e9-02e2-411c-9be1-3e5e11dec782", 00:08:56.222 "strip_size_kb": 64, 00:08:56.222 "state": "configuring", 00:08:56.222 "raid_level": "concat", 00:08:56.222 "superblock": true, 00:08:56.222 "num_base_bdevs": 3, 00:08:56.222 "num_base_bdevs_discovered": 1, 00:08:56.222 "num_base_bdevs_operational": 3, 00:08:56.222 "base_bdevs_list": [ 00:08:56.222 { 00:08:56.222 "name": "pt1", 00:08:56.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.222 "is_configured": true, 00:08:56.222 "data_offset": 2048, 00:08:56.222 "data_size": 63488 00:08:56.222 }, 00:08:56.222 { 00:08:56.222 "name": null, 00:08:56.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.222 "is_configured": false, 00:08:56.222 "data_offset": 0, 00:08:56.222 "data_size": 63488 00:08:56.222 }, 00:08:56.222 { 00:08:56.222 "name": null, 00:08:56.222 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.222 "is_configured": false, 00:08:56.222 "data_offset": 2048, 00:08:56.222 "data_size": 63488 00:08:56.222 } 00:08:56.222 ] 00:08:56.222 }' 00:08:56.222 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.222 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.482 [2024-11-20 03:15:45.965261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:56.482 [2024-11-20 03:15:45.965388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.482 [2024-11-20 03:15:45.965424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:56.482 [2024-11-20 03:15:45.965453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.482 [2024-11-20 03:15:45.966011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.482 [2024-11-20 03:15:45.966079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:56.482 [2024-11-20 03:15:45.966201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:56.482 [2024-11-20 03:15:45.966258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:56.482 pt2 00:08:56.482 03:15:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.482 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.482 [2024-11-20 03:15:45.977209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:56.483 [2024-11-20 03:15:45.977301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.483 [2024-11-20 03:15:45.977347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:56.483 [2024-11-20 03:15:45.977377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.483 [2024-11-20 03:15:45.977828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.483 [2024-11-20 03:15:45.977900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:56.483 [2024-11-20 03:15:45.977993] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:56.483 [2024-11-20 03:15:45.978041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:56.483 [2024-11-20 03:15:45.978189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:56.483 [2024-11-20 03:15:45.978229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:56.483 [2024-11-20 03:15:45.978498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:08:56.483 [2024-11-20 03:15:45.978724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:56.483 [2024-11-20 03:15:45.978769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:56.483 [2024-11-20 03:15:45.978973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.483 pt3 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.483 03:15:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.483 03:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.483 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.483 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.483 "name": "raid_bdev1", 00:08:56.483 "uuid": "e06bd4e9-02e2-411c-9be1-3e5e11dec782", 00:08:56.483 "strip_size_kb": 64, 00:08:56.483 "state": "online", 00:08:56.483 "raid_level": "concat", 00:08:56.483 "superblock": true, 00:08:56.483 "num_base_bdevs": 3, 00:08:56.483 "num_base_bdevs_discovered": 3, 00:08:56.483 "num_base_bdevs_operational": 3, 00:08:56.483 "base_bdevs_list": [ 00:08:56.483 { 00:08:56.483 "name": "pt1", 00:08:56.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.483 "is_configured": true, 00:08:56.483 "data_offset": 2048, 00:08:56.483 "data_size": 63488 00:08:56.483 }, 00:08:56.483 { 00:08:56.483 "name": "pt2", 00:08:56.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.483 "is_configured": true, 00:08:56.483 "data_offset": 2048, 00:08:56.483 "data_size": 63488 00:08:56.483 }, 00:08:56.483 { 00:08:56.483 "name": "pt3", 00:08:56.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.483 "is_configured": true, 00:08:56.483 "data_offset": 2048, 00:08:56.483 "data_size": 63488 00:08:56.483 } 00:08:56.483 ] 00:08:56.483 }' 00:08:56.483 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.483 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.053 [2024-11-20 03:15:46.476711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.053 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.053 "name": "raid_bdev1", 00:08:57.053 "aliases": [ 00:08:57.053 "e06bd4e9-02e2-411c-9be1-3e5e11dec782" 00:08:57.053 ], 00:08:57.053 "product_name": "Raid Volume", 00:08:57.053 "block_size": 512, 00:08:57.053 "num_blocks": 190464, 00:08:57.053 "uuid": "e06bd4e9-02e2-411c-9be1-3e5e11dec782", 00:08:57.053 "assigned_rate_limits": { 00:08:57.053 "rw_ios_per_sec": 0, 00:08:57.053 "rw_mbytes_per_sec": 0, 00:08:57.053 "r_mbytes_per_sec": 0, 00:08:57.053 "w_mbytes_per_sec": 0 00:08:57.053 }, 00:08:57.053 "claimed": false, 00:08:57.053 "zoned": false, 00:08:57.053 "supported_io_types": { 00:08:57.053 "read": true, 00:08:57.053 "write": true, 00:08:57.053 "unmap": true, 00:08:57.053 "flush": true, 00:08:57.053 "reset": true, 00:08:57.053 "nvme_admin": false, 00:08:57.053 "nvme_io": false, 
00:08:57.053 "nvme_io_md": false, 00:08:57.053 "write_zeroes": true, 00:08:57.053 "zcopy": false, 00:08:57.053 "get_zone_info": false, 00:08:57.053 "zone_management": false, 00:08:57.053 "zone_append": false, 00:08:57.053 "compare": false, 00:08:57.053 "compare_and_write": false, 00:08:57.053 "abort": false, 00:08:57.053 "seek_hole": false, 00:08:57.053 "seek_data": false, 00:08:57.053 "copy": false, 00:08:57.053 "nvme_iov_md": false 00:08:57.053 }, 00:08:57.053 "memory_domains": [ 00:08:57.053 { 00:08:57.053 "dma_device_id": "system", 00:08:57.054 "dma_device_type": 1 00:08:57.054 }, 00:08:57.054 { 00:08:57.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.054 "dma_device_type": 2 00:08:57.054 }, 00:08:57.054 { 00:08:57.054 "dma_device_id": "system", 00:08:57.054 "dma_device_type": 1 00:08:57.054 }, 00:08:57.054 { 00:08:57.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.054 "dma_device_type": 2 00:08:57.054 }, 00:08:57.054 { 00:08:57.054 "dma_device_id": "system", 00:08:57.054 "dma_device_type": 1 00:08:57.054 }, 00:08:57.054 { 00:08:57.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.054 "dma_device_type": 2 00:08:57.054 } 00:08:57.054 ], 00:08:57.054 "driver_specific": { 00:08:57.054 "raid": { 00:08:57.054 "uuid": "e06bd4e9-02e2-411c-9be1-3e5e11dec782", 00:08:57.054 "strip_size_kb": 64, 00:08:57.054 "state": "online", 00:08:57.054 "raid_level": "concat", 00:08:57.054 "superblock": true, 00:08:57.054 "num_base_bdevs": 3, 00:08:57.054 "num_base_bdevs_discovered": 3, 00:08:57.054 "num_base_bdevs_operational": 3, 00:08:57.054 "base_bdevs_list": [ 00:08:57.054 { 00:08:57.054 "name": "pt1", 00:08:57.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.054 "is_configured": true, 00:08:57.054 "data_offset": 2048, 00:08:57.054 "data_size": 63488 00:08:57.054 }, 00:08:57.054 { 00:08:57.054 "name": "pt2", 00:08:57.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.054 "is_configured": true, 00:08:57.054 "data_offset": 2048, 00:08:57.054 
"data_size": 63488 00:08:57.054 }, 00:08:57.054 { 00:08:57.054 "name": "pt3", 00:08:57.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.054 "is_configured": true, 00:08:57.054 "data_offset": 2048, 00:08:57.054 "data_size": 63488 00:08:57.054 } 00:08:57.054 ] 00:08:57.054 } 00:08:57.054 } 00:08:57.054 }' 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:57.054 pt2 00:08:57.054 pt3' 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.054 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:57.314 [2024-11-20 03:15:46.760227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e06bd4e9-02e2-411c-9be1-3e5e11dec782 '!=' e06bd4e9-02e2-411c-9be1-3e5e11dec782 ']' 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66709 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66709 ']' 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66709 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66709 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66709' 00:08:57.314 killing process with pid 66709 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66709 00:08:57.314 [2024-11-20 03:15:46.848325] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:57.314 [2024-11-20 03:15:46.848492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.314 03:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66709 00:08:57.314 [2024-11-20 03:15:46.848592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.314 [2024-11-20 03:15:46.848664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:57.575 [2024-11-20 03:15:47.157787] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.956 03:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:58.956 ************************************ 00:08:58.956 END TEST raid_superblock_test 00:08:58.956 ************************************ 00:08:58.956 00:08:58.956 real 0m5.347s 00:08:58.956 user 0m7.751s 00:08:58.956 sys 0m0.883s 00:08:58.956 03:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.956 03:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.956 03:15:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:58.956 03:15:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:58.956 03:15:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.956 03:15:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.956 ************************************ 00:08:58.956 START TEST raid_read_error_test 00:08:58.956 ************************************ 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:58.956 03:15:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JH30GSWlDk 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66962 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66962 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66962 ']' 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.956 03:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.956 [2024-11-20 03:15:48.440186] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:08:58.957 [2024-11-20 03:15:48.440383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66962 ] 00:08:59.217 [2024-11-20 03:15:48.618365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.217 [2024-11-20 03:15:48.737072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.476 [2024-11-20 03:15:48.928263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.476 [2024-11-20 03:15:48.928388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.742 BaseBdev1_malloc 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.742 true 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.742 [2024-11-20 03:15:49.347577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:59.742 [2024-11-20 03:15:49.347704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.742 [2024-11-20 03:15:49.347746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:59.742 [2024-11-20 03:15:49.347776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.742 [2024-11-20 03:15:49.349930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.742 [2024-11-20 03:15:49.350022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:59.742 BaseBdev1 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.742 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 BaseBdev2_malloc 00:09:00.018 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.018 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:00.018 03:15:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.018 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 true 00:09:00.018 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.018 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:00.018 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.018 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 [2024-11-20 03:15:49.419331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:00.018 [2024-11-20 03:15:49.419452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.018 [2024-11-20 03:15:49.419494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:00.018 [2024-11-20 03:15:49.419545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.018 [2024-11-20 03:15:49.422021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.018 [2024-11-20 03:15:49.422108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:00.018 BaseBdev2 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 BaseBdev3_malloc 00:09:00.019 03:15:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 true 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 [2024-11-20 03:15:49.503239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:00.019 [2024-11-20 03:15:49.503356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.019 [2024-11-20 03:15:49.503397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:00.019 [2024-11-20 03:15:49.503428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.019 [2024-11-20 03:15:49.507120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.019 [2024-11-20 03:15:49.507208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:00.019 BaseBdev3 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 [2024-11-20 03:15:49.515456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.019 [2024-11-20 03:15:49.517491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.019 [2024-11-20 03:15:49.517640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.019 [2024-11-20 03:15:49.517888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:00.019 [2024-11-20 03:15:49.517936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.019 [2024-11-20 03:15:49.518243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:00.019 [2024-11-20 03:15:49.518458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:00.019 [2024-11-20 03:15:49.518504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:00.019 [2024-11-20 03:15:49.518768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.019 03:15:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.019 "name": "raid_bdev1", 00:09:00.019 "uuid": "1ff31472-cc87-4a50-892f-c05234c7ae20", 00:09:00.019 "strip_size_kb": 64, 00:09:00.019 "state": "online", 00:09:00.019 "raid_level": "concat", 00:09:00.019 "superblock": true, 00:09:00.019 "num_base_bdevs": 3, 00:09:00.019 "num_base_bdevs_discovered": 3, 00:09:00.019 "num_base_bdevs_operational": 3, 00:09:00.019 "base_bdevs_list": [ 00:09:00.019 { 00:09:00.019 "name": "BaseBdev1", 00:09:00.019 "uuid": "bb2dcaee-b241-55a9-893d-0bffd5d9b61c", 00:09:00.019 "is_configured": true, 00:09:00.019 "data_offset": 2048, 00:09:00.019 "data_size": 63488 00:09:00.019 }, 00:09:00.019 { 00:09:00.019 "name": "BaseBdev2", 00:09:00.019 "uuid": "8e158f3a-2631-564c-94b3-3c3b52cd47d5", 00:09:00.019 "is_configured": true, 00:09:00.019 "data_offset": 2048, 00:09:00.019 "data_size": 63488 
00:09:00.019 }, 00:09:00.019 { 00:09:00.019 "name": "BaseBdev3", 00:09:00.019 "uuid": "557909c1-5e74-5afa-8467-4fbb0f50b97f", 00:09:00.019 "is_configured": true, 00:09:00.019 "data_offset": 2048, 00:09:00.019 "data_size": 63488 00:09:00.019 } 00:09:00.019 ] 00:09:00.019 }' 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.019 03:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.587 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:00.587 03:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:00.587 [2024-11-20 03:15:50.067839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.528 03:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.528 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.528 03:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.528 "name": "raid_bdev1", 00:09:01.528 "uuid": "1ff31472-cc87-4a50-892f-c05234c7ae20", 00:09:01.528 "strip_size_kb": 64, 00:09:01.528 "state": "online", 00:09:01.528 "raid_level": "concat", 00:09:01.528 "superblock": true, 00:09:01.528 "num_base_bdevs": 3, 00:09:01.528 "num_base_bdevs_discovered": 3, 00:09:01.528 "num_base_bdevs_operational": 3, 00:09:01.528 "base_bdevs_list": [ 00:09:01.528 { 00:09:01.528 "name": "BaseBdev1", 00:09:01.528 "uuid": "bb2dcaee-b241-55a9-893d-0bffd5d9b61c", 00:09:01.528 "is_configured": true, 00:09:01.528 "data_offset": 2048, 00:09:01.528 "data_size": 63488 
00:09:01.528 }, 00:09:01.528 { 00:09:01.528 "name": "BaseBdev2", 00:09:01.528 "uuid": "8e158f3a-2631-564c-94b3-3c3b52cd47d5", 00:09:01.528 "is_configured": true, 00:09:01.528 "data_offset": 2048, 00:09:01.528 "data_size": 63488 00:09:01.528 }, 00:09:01.528 { 00:09:01.528 "name": "BaseBdev3", 00:09:01.528 "uuid": "557909c1-5e74-5afa-8467-4fbb0f50b97f", 00:09:01.528 "is_configured": true, 00:09:01.528 "data_offset": 2048, 00:09:01.528 "data_size": 63488 00:09:01.528 } 00:09:01.528 ] 00:09:01.528 }' 00:09:01.528 03:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.528 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.098 [2024-11-20 03:15:51.452071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.098 [2024-11-20 03:15:51.452180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.098 [2024-11-20 03:15:51.454990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.098 [2024-11-20 03:15:51.455083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.098 [2024-11-20 03:15:51.455140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.098 [2024-11-20 03:15:51.455183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:02.098 { 00:09:02.098 "results": [ 00:09:02.098 { 00:09:02.098 "job": "raid_bdev1", 00:09:02.098 "core_mask": "0x1", 00:09:02.098 "workload": "randrw", 00:09:02.098 "percentage": 50, 
00:09:02.098 "status": "finished", 00:09:02.098 "queue_depth": 1, 00:09:02.098 "io_size": 131072, 00:09:02.098 "runtime": 1.385195, 00:09:02.098 "iops": 15584.087438952638, 00:09:02.098 "mibps": 1948.0109298690797, 00:09:02.098 "io_failed": 1, 00:09:02.098 "io_timeout": 0, 00:09:02.098 "avg_latency_us": 89.25875656296195, 00:09:02.098 "min_latency_us": 26.717903930131005, 00:09:02.098 "max_latency_us": 1488.1537117903931 00:09:02.098 } 00:09:02.098 ], 00:09:02.098 "core_count": 1 00:09:02.098 } 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66962 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66962 ']' 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66962 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66962 00:09:02.098 killing process with pid 66962 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66962' 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66962 00:09:02.098 [2024-11-20 03:15:51.500324] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.098 03:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66962 00:09:02.359 [2024-11-20 
03:15:51.730842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JH30GSWlDk 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:03.298 ************************************ 00:09:03.298 END TEST raid_read_error_test 00:09:03.298 ************************************ 00:09:03.298 00:09:03.298 real 0m4.565s 00:09:03.298 user 0m5.461s 00:09:03.298 sys 0m0.533s 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.298 03:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.558 03:15:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:03.558 03:15:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:03.558 03:15:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.558 03:15:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.558 ************************************ 00:09:03.558 START TEST raid_write_error_test 00:09:03.558 ************************************ 00:09:03.558 03:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:03.558 03:15:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:03.558 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:03.558 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:03.558 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:03.558 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:03.559 03:15:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.U46BLE3zeJ 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67113 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67113 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67113 ']' 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.559 03:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.559 [2024-11-20 03:15:53.072400] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:09:03.559 [2024-11-20 03:15:53.072534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67113 ] 00:09:03.819 [2024-11-20 03:15:53.257045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.819 [2024-11-20 03:15:53.372963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.079 [2024-11-20 03:15:53.568761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.079 [2024-11-20 03:15:53.568806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.340 BaseBdev1_malloc 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.340 true 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.340 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.340 [2024-11-20 03:15:53.971821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:04.600 [2024-11-20 03:15:53.971928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.600 [2024-11-20 03:15:53.971954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:04.600 [2024-11-20 03:15:53.971965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.600 [2024-11-20 03:15:53.974299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.600 [2024-11-20 03:15:53.974393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:04.600 BaseBdev1 00:09:04.600 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.600 03:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.600 03:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:04.600 03:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.600 03:15:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.600 BaseBdev2_malloc 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.600 true 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.600 [2024-11-20 03:15:54.037657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:04.600 [2024-11-20 03:15:54.037716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.600 [2024-11-20 03:15:54.037734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:04.600 [2024-11-20 03:15:54.037745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.600 [2024-11-20 03:15:54.039942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.600 [2024-11-20 03:15:54.040023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:04.600 BaseBdev2 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.600 03:15:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.600 BaseBdev3_malloc 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.600 true 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.600 [2024-11-20 03:15:54.111842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:04.600 [2024-11-20 03:15:54.111899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.600 [2024-11-20 03:15:54.111917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:04.600 [2024-11-20 03:15:54.111927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.600 [2024-11-20 03:15:54.114050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.600 [2024-11-20 03:15:54.114091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:04.600 BaseBdev3 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.600 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.600 [2024-11-20 03:15:54.123895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.600 [2024-11-20 03:15:54.125693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.600 [2024-11-20 03:15:54.125773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.600 [2024-11-20 03:15:54.125975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:04.600 [2024-11-20 03:15:54.125987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.601 [2024-11-20 03:15:54.126243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:04.601 [2024-11-20 03:15:54.126399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:04.601 [2024-11-20 03:15:54.126412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:04.601 [2024-11-20 03:15:54.126598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.601 "name": "raid_bdev1", 00:09:04.601 "uuid": "77ac5d5e-714b-4569-84fa-84a0921e2202", 00:09:04.601 "strip_size_kb": 64, 00:09:04.601 "state": "online", 00:09:04.601 "raid_level": "concat", 00:09:04.601 "superblock": true, 00:09:04.601 "num_base_bdevs": 3, 00:09:04.601 "num_base_bdevs_discovered": 3, 00:09:04.601 "num_base_bdevs_operational": 3, 00:09:04.601 "base_bdevs_list": [ 00:09:04.601 { 00:09:04.601 
"name": "BaseBdev1", 00:09:04.601 "uuid": "cbc1c869-a511-5aec-92eb-0236d1101ef5", 00:09:04.601 "is_configured": true, 00:09:04.601 "data_offset": 2048, 00:09:04.601 "data_size": 63488 00:09:04.601 }, 00:09:04.601 { 00:09:04.601 "name": "BaseBdev2", 00:09:04.601 "uuid": "013b30b0-ac47-5f74-a75b-aee75766dcce", 00:09:04.601 "is_configured": true, 00:09:04.601 "data_offset": 2048, 00:09:04.601 "data_size": 63488 00:09:04.601 }, 00:09:04.601 { 00:09:04.601 "name": "BaseBdev3", 00:09:04.601 "uuid": "90ab9973-8a26-5901-b16e-b5638b429e58", 00:09:04.601 "is_configured": true, 00:09:04.601 "data_offset": 2048, 00:09:04.601 "data_size": 63488 00:09:04.601 } 00:09:04.601 ] 00:09:04.601 }' 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.601 03:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.169 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:05.169 03:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:05.169 [2024-11-20 03:15:54.676489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.108 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.108 "name": "raid_bdev1", 00:09:06.108 "uuid": "77ac5d5e-714b-4569-84fa-84a0921e2202", 00:09:06.108 "strip_size_kb": 64, 00:09:06.108 "state": "online", 
00:09:06.108 "raid_level": "concat", 00:09:06.108 "superblock": true, 00:09:06.108 "num_base_bdevs": 3, 00:09:06.108 "num_base_bdevs_discovered": 3, 00:09:06.108 "num_base_bdevs_operational": 3, 00:09:06.108 "base_bdevs_list": [ 00:09:06.108 { 00:09:06.108 "name": "BaseBdev1", 00:09:06.108 "uuid": "cbc1c869-a511-5aec-92eb-0236d1101ef5", 00:09:06.108 "is_configured": true, 00:09:06.108 "data_offset": 2048, 00:09:06.108 "data_size": 63488 00:09:06.108 }, 00:09:06.108 { 00:09:06.108 "name": "BaseBdev2", 00:09:06.108 "uuid": "013b30b0-ac47-5f74-a75b-aee75766dcce", 00:09:06.108 "is_configured": true, 00:09:06.108 "data_offset": 2048, 00:09:06.108 "data_size": 63488 00:09:06.108 }, 00:09:06.108 { 00:09:06.108 "name": "BaseBdev3", 00:09:06.108 "uuid": "90ab9973-8a26-5901-b16e-b5638b429e58", 00:09:06.108 "is_configured": true, 00:09:06.108 "data_offset": 2048, 00:09:06.108 "data_size": 63488 00:09:06.108 } 00:09:06.108 ] 00:09:06.109 }' 00:09:06.109 03:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.109 03:15:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.678 03:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:06.678 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.678 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.678 [2024-11-20 03:15:56.072841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.678 [2024-11-20 03:15:56.072874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.678 [2024-11-20 03:15:56.075742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.678 [2024-11-20 03:15:56.075789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.678 [2024-11-20 03:15:56.075827] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.678 [2024-11-20 03:15:56.075840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:06.678 { 00:09:06.678 "results": [ 00:09:06.678 { 00:09:06.678 "job": "raid_bdev1", 00:09:06.678 "core_mask": "0x1", 00:09:06.678 "workload": "randrw", 00:09:06.678 "percentage": 50, 00:09:06.678 "status": "finished", 00:09:06.678 "queue_depth": 1, 00:09:06.678 "io_size": 131072, 00:09:06.678 "runtime": 1.397162, 00:09:06.679 "iops": 15797.022821977695, 00:09:06.679 "mibps": 1974.6278527472118, 00:09:06.679 "io_failed": 1, 00:09:06.679 "io_timeout": 0, 00:09:06.679 "avg_latency_us": 87.99488201376677, 00:09:06.679 "min_latency_us": 26.382532751091702, 00:09:06.679 "max_latency_us": 1430.9170305676855 00:09:06.679 } 00:09:06.679 ], 00:09:06.679 "core_count": 1 00:09:06.679 } 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67113 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67113 ']' 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67113 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67113 00:09:06.679 killing process with pid 67113 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.679 
03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67113' 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67113 00:09:06.679 03:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67113 00:09:06.679 [2024-11-20 03:15:56.119134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.938 [2024-11-20 03:15:56.347672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.878 03:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.U46BLE3zeJ 00:09:07.878 03:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:07.878 03:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:08.139 03:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:08.139 03:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:08.139 03:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.139 03:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.139 03:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:08.139 00:09:08.139 real 0m4.551s 00:09:08.139 user 0m5.430s 00:09:08.139 sys 0m0.576s 00:09:08.139 ************************************ 00:09:08.139 END TEST raid_write_error_test 00:09:08.139 ************************************ 00:09:08.139 03:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.139 03:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.139 03:15:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:08.139 03:15:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:08.139 03:15:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:08.139 03:15:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.139 03:15:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.139 ************************************ 00:09:08.139 START TEST raid_state_function_test 00:09:08.139 ************************************ 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67254 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67254' 00:09:08.139 Process raid pid: 67254 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67254 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67254 ']' 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.139 03:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.139 [2024-11-20 03:15:57.684902] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:09:08.139 [2024-11-20 03:15:57.685122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.401 [2024-11-20 03:15:57.860720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.401 [2024-11-20 03:15:57.974883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.661 [2024-11-20 03:15:58.186970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.661 [2024-11-20 03:15:58.187104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.921 [2024-11-20 03:15:58.516204] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.921 [2024-11-20 03:15:58.516268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.921 [2024-11-20 03:15:58.516280] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.921 [2024-11-20 03:15:58.516290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.921 [2024-11-20 03:15:58.516297] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.921 [2024-11-20 03:15:58.516306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.921 
03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.921 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.181 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.181 "name": "Existed_Raid", 00:09:09.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.181 "strip_size_kb": 0, 00:09:09.181 "state": "configuring", 00:09:09.181 "raid_level": "raid1", 00:09:09.181 "superblock": false, 00:09:09.181 "num_base_bdevs": 3, 00:09:09.181 "num_base_bdevs_discovered": 0, 00:09:09.181 "num_base_bdevs_operational": 3, 00:09:09.181 "base_bdevs_list": [ 00:09:09.181 { 00:09:09.181 "name": "BaseBdev1", 00:09:09.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.181 "is_configured": false, 00:09:09.181 "data_offset": 0, 00:09:09.181 "data_size": 0 00:09:09.181 }, 00:09:09.181 { 00:09:09.181 "name": "BaseBdev2", 00:09:09.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.181 "is_configured": false, 00:09:09.181 "data_offset": 0, 00:09:09.181 "data_size": 0 00:09:09.181 }, 00:09:09.181 { 00:09:09.181 "name": "BaseBdev3", 00:09:09.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.181 "is_configured": false, 00:09:09.181 "data_offset": 0, 00:09:09.181 "data_size": 0 00:09:09.181 } 00:09:09.181 ] 00:09:09.181 }' 00:09:09.181 03:15:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.181 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.441 [2024-11-20 03:15:58.979380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.441 [2024-11-20 03:15:58.979494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.441 [2024-11-20 03:15:58.991341] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.441 [2024-11-20 03:15:58.991393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.441 [2024-11-20 03:15:58.991403] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.441 [2024-11-20 03:15:58.991412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.441 [2024-11-20 03:15:58.991418] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.441 [2024-11-20 03:15:58.991427] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.441 03:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.441 [2024-11-20 03:15:59.039082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.441 BaseBdev1 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.441 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.441 [ 00:09:09.441 { 00:09:09.441 "name": "BaseBdev1", 00:09:09.441 "aliases": [ 00:09:09.441 "23129084-89cf-47bb-9f81-e70834f3b6a5" 00:09:09.441 ], 00:09:09.441 "product_name": "Malloc disk", 00:09:09.441 "block_size": 512, 00:09:09.441 "num_blocks": 65536, 00:09:09.441 "uuid": "23129084-89cf-47bb-9f81-e70834f3b6a5", 00:09:09.441 "assigned_rate_limits": { 00:09:09.441 "rw_ios_per_sec": 0, 00:09:09.441 "rw_mbytes_per_sec": 0, 00:09:09.441 "r_mbytes_per_sec": 0, 00:09:09.441 "w_mbytes_per_sec": 0 00:09:09.441 }, 00:09:09.441 "claimed": true, 00:09:09.441 "claim_type": "exclusive_write", 00:09:09.441 "zoned": false, 00:09:09.441 "supported_io_types": { 00:09:09.441 "read": true, 00:09:09.441 "write": true, 00:09:09.441 "unmap": true, 00:09:09.441 "flush": true, 00:09:09.441 "reset": true, 00:09:09.441 "nvme_admin": false, 00:09:09.441 "nvme_io": false, 00:09:09.441 "nvme_io_md": false, 00:09:09.441 "write_zeroes": true, 00:09:09.441 "zcopy": true, 00:09:09.441 "get_zone_info": false, 00:09:09.441 "zone_management": false, 00:09:09.442 "zone_append": false, 00:09:09.442 "compare": false, 00:09:09.442 "compare_and_write": false, 00:09:09.442 "abort": true, 00:09:09.442 "seek_hole": false, 00:09:09.442 "seek_data": false, 00:09:09.700 "copy": true, 00:09:09.700 "nvme_iov_md": false 00:09:09.700 }, 00:09:09.700 "memory_domains": [ 00:09:09.700 { 00:09:09.700 "dma_device_id": "system", 00:09:09.700 "dma_device_type": 1 00:09:09.700 }, 00:09:09.700 { 00:09:09.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.700 "dma_device_type": 2 00:09:09.700 } 00:09:09.700 ], 00:09:09.700 "driver_specific": {} 00:09:09.700 } 00:09:09.700 ] 00:09:09.700 03:15:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.700 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.701 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.701 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:09.701 "name": "Existed_Raid", 00:09:09.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.701 "strip_size_kb": 0, 00:09:09.701 "state": "configuring", 00:09:09.701 "raid_level": "raid1", 00:09:09.701 "superblock": false, 00:09:09.701 "num_base_bdevs": 3, 00:09:09.701 "num_base_bdevs_discovered": 1, 00:09:09.701 "num_base_bdevs_operational": 3, 00:09:09.701 "base_bdevs_list": [ 00:09:09.701 { 00:09:09.701 "name": "BaseBdev1", 00:09:09.701 "uuid": "23129084-89cf-47bb-9f81-e70834f3b6a5", 00:09:09.701 "is_configured": true, 00:09:09.701 "data_offset": 0, 00:09:09.701 "data_size": 65536 00:09:09.701 }, 00:09:09.701 { 00:09:09.701 "name": "BaseBdev2", 00:09:09.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.701 "is_configured": false, 00:09:09.701 "data_offset": 0, 00:09:09.701 "data_size": 0 00:09:09.701 }, 00:09:09.701 { 00:09:09.701 "name": "BaseBdev3", 00:09:09.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.701 "is_configured": false, 00:09:09.701 "data_offset": 0, 00:09:09.701 "data_size": 0 00:09:09.701 } 00:09:09.701 ] 00:09:09.701 }' 00:09:09.701 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.701 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.959 [2024-11-20 03:15:59.510342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.959 [2024-11-20 03:15:59.510462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.959 [2024-11-20 03:15:59.522362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.959 [2024-11-20 03:15:59.524380] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.959 [2024-11-20 03:15:59.524466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.959 [2024-11-20 03:15:59.524502] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.959 [2024-11-20 03:15:59.524542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.959 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.959 "name": "Existed_Raid", 00:09:09.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.959 "strip_size_kb": 0, 00:09:09.959 "state": "configuring", 00:09:09.959 "raid_level": "raid1", 00:09:09.959 "superblock": false, 00:09:09.959 "num_base_bdevs": 3, 00:09:09.959 "num_base_bdevs_discovered": 1, 00:09:09.960 "num_base_bdevs_operational": 3, 00:09:09.960 "base_bdevs_list": [ 00:09:09.960 { 00:09:09.960 "name": "BaseBdev1", 00:09:09.960 "uuid": "23129084-89cf-47bb-9f81-e70834f3b6a5", 00:09:09.960 "is_configured": true, 00:09:09.960 "data_offset": 0, 00:09:09.960 "data_size": 65536 00:09:09.960 }, 00:09:09.960 { 00:09:09.960 "name": "BaseBdev2", 00:09:09.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.960 
"is_configured": false, 00:09:09.960 "data_offset": 0, 00:09:09.960 "data_size": 0 00:09:09.960 }, 00:09:09.960 { 00:09:09.960 "name": "BaseBdev3", 00:09:09.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.960 "is_configured": false, 00:09:09.960 "data_offset": 0, 00:09:09.960 "data_size": 0 00:09:09.960 } 00:09:09.960 ] 00:09:09.960 }' 00:09:09.960 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.960 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.528 [2024-11-20 03:15:59.977484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.528 BaseBdev2 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.528 03:15:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.528 03:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.528 [ 00:09:10.528 { 00:09:10.528 "name": "BaseBdev2", 00:09:10.528 "aliases": [ 00:09:10.528 "edf2ee04-ed93-49c2-87f4-47e63d534062" 00:09:10.528 ], 00:09:10.528 "product_name": "Malloc disk", 00:09:10.528 "block_size": 512, 00:09:10.528 "num_blocks": 65536, 00:09:10.529 "uuid": "edf2ee04-ed93-49c2-87f4-47e63d534062", 00:09:10.529 "assigned_rate_limits": { 00:09:10.529 "rw_ios_per_sec": 0, 00:09:10.529 "rw_mbytes_per_sec": 0, 00:09:10.529 "r_mbytes_per_sec": 0, 00:09:10.529 "w_mbytes_per_sec": 0 00:09:10.529 }, 00:09:10.529 "claimed": true, 00:09:10.529 "claim_type": "exclusive_write", 00:09:10.529 "zoned": false, 00:09:10.529 "supported_io_types": { 00:09:10.529 "read": true, 00:09:10.529 "write": true, 00:09:10.529 "unmap": true, 00:09:10.529 "flush": true, 00:09:10.529 "reset": true, 00:09:10.529 "nvme_admin": false, 00:09:10.529 "nvme_io": false, 00:09:10.529 "nvme_io_md": false, 00:09:10.529 "write_zeroes": true, 00:09:10.529 "zcopy": true, 00:09:10.529 "get_zone_info": false, 00:09:10.529 "zone_management": false, 00:09:10.529 "zone_append": false, 00:09:10.529 "compare": false, 00:09:10.529 "compare_and_write": false, 00:09:10.529 "abort": true, 00:09:10.529 "seek_hole": false, 00:09:10.529 "seek_data": false, 00:09:10.529 "copy": true, 00:09:10.529 "nvme_iov_md": false 00:09:10.529 }, 00:09:10.529 
"memory_domains": [ 00:09:10.529 { 00:09:10.529 "dma_device_id": "system", 00:09:10.529 "dma_device_type": 1 00:09:10.529 }, 00:09:10.529 { 00:09:10.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.529 "dma_device_type": 2 00:09:10.529 } 00:09:10.529 ], 00:09:10.529 "driver_specific": {} 00:09:10.529 } 00:09:10.529 ] 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.529 "name": "Existed_Raid", 00:09:10.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.529 "strip_size_kb": 0, 00:09:10.529 "state": "configuring", 00:09:10.529 "raid_level": "raid1", 00:09:10.529 "superblock": false, 00:09:10.529 "num_base_bdevs": 3, 00:09:10.529 "num_base_bdevs_discovered": 2, 00:09:10.529 "num_base_bdevs_operational": 3, 00:09:10.529 "base_bdevs_list": [ 00:09:10.529 { 00:09:10.529 "name": "BaseBdev1", 00:09:10.529 "uuid": "23129084-89cf-47bb-9f81-e70834f3b6a5", 00:09:10.529 "is_configured": true, 00:09:10.529 "data_offset": 0, 00:09:10.529 "data_size": 65536 00:09:10.529 }, 00:09:10.529 { 00:09:10.529 "name": "BaseBdev2", 00:09:10.529 "uuid": "edf2ee04-ed93-49c2-87f4-47e63d534062", 00:09:10.529 "is_configured": true, 00:09:10.529 "data_offset": 0, 00:09:10.529 "data_size": 65536 00:09:10.529 }, 00:09:10.529 { 00:09:10.529 "name": "BaseBdev3", 00:09:10.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.529 "is_configured": false, 00:09:10.529 "data_offset": 0, 00:09:10.529 "data_size": 0 00:09:10.529 } 00:09:10.529 ] 00:09:10.529 }' 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.529 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.099 [2024-11-20 03:16:00.513026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.099 [2024-11-20 03:16:00.513161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:11.099 [2024-11-20 03:16:00.513180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:11.099 [2024-11-20 03:16:00.513503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:11.099 [2024-11-20 03:16:00.513721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:11.099 [2024-11-20 03:16:00.513733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:11.099 [2024-11-20 03:16:00.514012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.099 BaseBdev3 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.099 [ 00:09:11.099 { 00:09:11.099 "name": "BaseBdev3", 00:09:11.099 "aliases": [ 00:09:11.099 "0f1ef776-9b08-4b59-bcf5-fc6ff7c26d59" 00:09:11.099 ], 00:09:11.099 "product_name": "Malloc disk", 00:09:11.099 "block_size": 512, 00:09:11.099 "num_blocks": 65536, 00:09:11.099 "uuid": "0f1ef776-9b08-4b59-bcf5-fc6ff7c26d59", 00:09:11.099 "assigned_rate_limits": { 00:09:11.099 "rw_ios_per_sec": 0, 00:09:11.099 "rw_mbytes_per_sec": 0, 00:09:11.099 "r_mbytes_per_sec": 0, 00:09:11.099 "w_mbytes_per_sec": 0 00:09:11.099 }, 00:09:11.099 "claimed": true, 00:09:11.099 "claim_type": "exclusive_write", 00:09:11.099 "zoned": false, 00:09:11.099 "supported_io_types": { 00:09:11.099 "read": true, 00:09:11.099 "write": true, 00:09:11.099 "unmap": true, 00:09:11.099 "flush": true, 00:09:11.099 "reset": true, 00:09:11.099 "nvme_admin": false, 00:09:11.099 "nvme_io": false, 00:09:11.099 "nvme_io_md": false, 00:09:11.099 "write_zeroes": true, 00:09:11.099 "zcopy": true, 00:09:11.099 "get_zone_info": false, 00:09:11.099 "zone_management": false, 00:09:11.099 "zone_append": false, 00:09:11.099 "compare": false, 00:09:11.099 "compare_and_write": false, 00:09:11.099 "abort": true, 00:09:11.099 "seek_hole": false, 00:09:11.099 "seek_data": false, 00:09:11.099 
"copy": true, 00:09:11.099 "nvme_iov_md": false 00:09:11.099 }, 00:09:11.099 "memory_domains": [ 00:09:11.099 { 00:09:11.099 "dma_device_id": "system", 00:09:11.099 "dma_device_type": 1 00:09:11.099 }, 00:09:11.099 { 00:09:11.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.099 "dma_device_type": 2 00:09:11.099 } 00:09:11.099 ], 00:09:11.099 "driver_specific": {} 00:09:11.099 } 00:09:11.099 ] 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.099 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.100 03:16:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.100 "name": "Existed_Raid", 00:09:11.100 "uuid": "9c52f8d5-9fb3-4499-9fbd-c81e67a3423d", 00:09:11.100 "strip_size_kb": 0, 00:09:11.100 "state": "online", 00:09:11.100 "raid_level": "raid1", 00:09:11.100 "superblock": false, 00:09:11.100 "num_base_bdevs": 3, 00:09:11.100 "num_base_bdevs_discovered": 3, 00:09:11.100 "num_base_bdevs_operational": 3, 00:09:11.100 "base_bdevs_list": [ 00:09:11.100 { 00:09:11.100 "name": "BaseBdev1", 00:09:11.100 "uuid": "23129084-89cf-47bb-9f81-e70834f3b6a5", 00:09:11.100 "is_configured": true, 00:09:11.100 "data_offset": 0, 00:09:11.100 "data_size": 65536 00:09:11.100 }, 00:09:11.100 { 00:09:11.100 "name": "BaseBdev2", 00:09:11.100 "uuid": "edf2ee04-ed93-49c2-87f4-47e63d534062", 00:09:11.100 "is_configured": true, 00:09:11.100 "data_offset": 0, 00:09:11.100 "data_size": 65536 00:09:11.100 }, 00:09:11.100 { 00:09:11.100 "name": "BaseBdev3", 00:09:11.100 "uuid": "0f1ef776-9b08-4b59-bcf5-fc6ff7c26d59", 00:09:11.100 "is_configured": true, 00:09:11.100 "data_offset": 0, 00:09:11.100 "data_size": 65536 00:09:11.100 } 00:09:11.100 ] 00:09:11.100 }' 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.100 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.360 03:16:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.360 [2024-11-20 03:16:00.908735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.360 "name": "Existed_Raid", 00:09:11.360 "aliases": [ 00:09:11.360 "9c52f8d5-9fb3-4499-9fbd-c81e67a3423d" 00:09:11.360 ], 00:09:11.360 "product_name": "Raid Volume", 00:09:11.360 "block_size": 512, 00:09:11.360 "num_blocks": 65536, 00:09:11.360 "uuid": "9c52f8d5-9fb3-4499-9fbd-c81e67a3423d", 00:09:11.360 "assigned_rate_limits": { 00:09:11.360 "rw_ios_per_sec": 0, 00:09:11.360 "rw_mbytes_per_sec": 0, 00:09:11.360 "r_mbytes_per_sec": 0, 00:09:11.360 "w_mbytes_per_sec": 0 00:09:11.360 }, 00:09:11.360 "claimed": false, 00:09:11.360 "zoned": false, 
00:09:11.360 "supported_io_types": { 00:09:11.360 "read": true, 00:09:11.360 "write": true, 00:09:11.360 "unmap": false, 00:09:11.360 "flush": false, 00:09:11.360 "reset": true, 00:09:11.360 "nvme_admin": false, 00:09:11.360 "nvme_io": false, 00:09:11.360 "nvme_io_md": false, 00:09:11.360 "write_zeroes": true, 00:09:11.360 "zcopy": false, 00:09:11.360 "get_zone_info": false, 00:09:11.360 "zone_management": false, 00:09:11.360 "zone_append": false, 00:09:11.360 "compare": false, 00:09:11.360 "compare_and_write": false, 00:09:11.360 "abort": false, 00:09:11.360 "seek_hole": false, 00:09:11.360 "seek_data": false, 00:09:11.360 "copy": false, 00:09:11.360 "nvme_iov_md": false 00:09:11.360 }, 00:09:11.360 "memory_domains": [ 00:09:11.360 { 00:09:11.360 "dma_device_id": "system", 00:09:11.360 "dma_device_type": 1 00:09:11.360 }, 00:09:11.360 { 00:09:11.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.360 "dma_device_type": 2 00:09:11.360 }, 00:09:11.360 { 00:09:11.360 "dma_device_id": "system", 00:09:11.360 "dma_device_type": 1 00:09:11.360 }, 00:09:11.360 { 00:09:11.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.360 "dma_device_type": 2 00:09:11.360 }, 00:09:11.360 { 00:09:11.360 "dma_device_id": "system", 00:09:11.360 "dma_device_type": 1 00:09:11.360 }, 00:09:11.360 { 00:09:11.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.360 "dma_device_type": 2 00:09:11.360 } 00:09:11.360 ], 00:09:11.360 "driver_specific": { 00:09:11.360 "raid": { 00:09:11.360 "uuid": "9c52f8d5-9fb3-4499-9fbd-c81e67a3423d", 00:09:11.360 "strip_size_kb": 0, 00:09:11.360 "state": "online", 00:09:11.360 "raid_level": "raid1", 00:09:11.360 "superblock": false, 00:09:11.360 "num_base_bdevs": 3, 00:09:11.360 "num_base_bdevs_discovered": 3, 00:09:11.360 "num_base_bdevs_operational": 3, 00:09:11.360 "base_bdevs_list": [ 00:09:11.360 { 00:09:11.360 "name": "BaseBdev1", 00:09:11.360 "uuid": "23129084-89cf-47bb-9f81-e70834f3b6a5", 00:09:11.360 "is_configured": true, 00:09:11.360 
"data_offset": 0, 00:09:11.360 "data_size": 65536 00:09:11.360 }, 00:09:11.360 { 00:09:11.360 "name": "BaseBdev2", 00:09:11.360 "uuid": "edf2ee04-ed93-49c2-87f4-47e63d534062", 00:09:11.360 "is_configured": true, 00:09:11.360 "data_offset": 0, 00:09:11.360 "data_size": 65536 00:09:11.360 }, 00:09:11.360 { 00:09:11.360 "name": "BaseBdev3", 00:09:11.360 "uuid": "0f1ef776-9b08-4b59-bcf5-fc6ff7c26d59", 00:09:11.360 "is_configured": true, 00:09:11.360 "data_offset": 0, 00:09:11.360 "data_size": 65536 00:09:11.360 } 00:09:11.360 ] 00:09:11.360 } 00:09:11.360 } 00:09:11.360 }' 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.360 BaseBdev2 00:09:11.360 BaseBdev3' 00:09:11.360 03:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.619 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.619 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.619 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.619 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.619 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.619 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.619 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.619 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.620 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.620 [2024-11-20 03:16:01.192003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.879 "name": "Existed_Raid", 00:09:11.879 "uuid": "9c52f8d5-9fb3-4499-9fbd-c81e67a3423d", 00:09:11.879 "strip_size_kb": 0, 00:09:11.879 "state": "online", 00:09:11.879 "raid_level": "raid1", 00:09:11.879 "superblock": false, 00:09:11.879 "num_base_bdevs": 3, 00:09:11.879 "num_base_bdevs_discovered": 2, 00:09:11.879 "num_base_bdevs_operational": 2, 00:09:11.879 "base_bdevs_list": [ 00:09:11.879 { 00:09:11.879 "name": null, 00:09:11.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.879 "is_configured": false, 00:09:11.879 "data_offset": 0, 00:09:11.879 "data_size": 65536 00:09:11.879 }, 00:09:11.879 { 00:09:11.879 "name": "BaseBdev2", 00:09:11.879 "uuid": "edf2ee04-ed93-49c2-87f4-47e63d534062", 00:09:11.879 "is_configured": true, 00:09:11.879 "data_offset": 0, 00:09:11.879 "data_size": 65536 00:09:11.879 }, 00:09:11.879 { 00:09:11.879 "name": "BaseBdev3", 00:09:11.879 "uuid": "0f1ef776-9b08-4b59-bcf5-fc6ff7c26d59", 00:09:11.879 "is_configured": true, 00:09:11.879 "data_offset": 0, 00:09:11.879 "data_size": 65536 00:09:11.879 } 00:09:11.879 ] 
00:09:11.879 }' 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.879 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.138 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:12.138 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.138 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.138 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.138 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.138 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.395 [2024-11-20 03:16:01.818376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.395 03:16:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.395 03:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.395 [2024-11-20 03:16:01.971568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.395 [2024-11-20 03:16:01.971748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.654 [2024-11-20 03:16:02.068336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.655 [2024-11-20 03:16:02.068476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.655 [2024-11-20 03:16:02.068519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.655 03:16:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 BaseBdev2 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.655 
03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 [ 00:09:12.655 { 00:09:12.655 "name": "BaseBdev2", 00:09:12.655 "aliases": [ 00:09:12.655 "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4" 00:09:12.655 ], 00:09:12.655 "product_name": "Malloc disk", 00:09:12.655 "block_size": 512, 00:09:12.655 "num_blocks": 65536, 00:09:12.655 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:12.655 "assigned_rate_limits": { 00:09:12.655 "rw_ios_per_sec": 0, 00:09:12.655 "rw_mbytes_per_sec": 0, 00:09:12.655 "r_mbytes_per_sec": 0, 00:09:12.655 "w_mbytes_per_sec": 0 00:09:12.655 }, 00:09:12.655 "claimed": false, 00:09:12.655 "zoned": false, 00:09:12.655 "supported_io_types": { 00:09:12.655 "read": true, 00:09:12.655 "write": true, 00:09:12.655 "unmap": true, 00:09:12.655 "flush": true, 00:09:12.655 "reset": true, 00:09:12.655 "nvme_admin": false, 00:09:12.655 "nvme_io": false, 00:09:12.655 "nvme_io_md": false, 00:09:12.655 "write_zeroes": true, 
00:09:12.655 "zcopy": true, 00:09:12.655 "get_zone_info": false, 00:09:12.655 "zone_management": false, 00:09:12.655 "zone_append": false, 00:09:12.655 "compare": false, 00:09:12.655 "compare_and_write": false, 00:09:12.655 "abort": true, 00:09:12.655 "seek_hole": false, 00:09:12.655 "seek_data": false, 00:09:12.655 "copy": true, 00:09:12.655 "nvme_iov_md": false 00:09:12.655 }, 00:09:12.655 "memory_domains": [ 00:09:12.655 { 00:09:12.655 "dma_device_id": "system", 00:09:12.655 "dma_device_type": 1 00:09:12.655 }, 00:09:12.655 { 00:09:12.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.655 "dma_device_type": 2 00:09:12.655 } 00:09:12.655 ], 00:09:12.655 "driver_specific": {} 00:09:12.655 } 00:09:12.655 ] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 BaseBdev3 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.655 03:16:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.655 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 [ 00:09:12.655 { 00:09:12.655 "name": "BaseBdev3", 00:09:12.655 "aliases": [ 00:09:12.655 "24825a9c-4809-435b-9295-6db1a95dcaad" 00:09:12.655 ], 00:09:12.655 "product_name": "Malloc disk", 00:09:12.655 "block_size": 512, 00:09:12.655 "num_blocks": 65536, 00:09:12.655 "uuid": "24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:12.655 "assigned_rate_limits": { 00:09:12.655 "rw_ios_per_sec": 0, 00:09:12.655 "rw_mbytes_per_sec": 0, 00:09:12.655 "r_mbytes_per_sec": 0, 00:09:12.655 "w_mbytes_per_sec": 0 00:09:12.655 }, 00:09:12.655 "claimed": false, 00:09:12.655 "zoned": false, 00:09:12.655 "supported_io_types": { 00:09:12.655 "read": true, 00:09:12.655 "write": true, 00:09:12.655 "unmap": true, 00:09:12.655 "flush": true, 00:09:12.655 "reset": true, 00:09:12.655 "nvme_admin": false, 00:09:12.655 "nvme_io": false, 00:09:12.655 "nvme_io_md": false, 00:09:12.655 "write_zeroes": true, 
00:09:12.655 "zcopy": true, 00:09:12.655 "get_zone_info": false, 00:09:12.655 "zone_management": false, 00:09:12.655 "zone_append": false, 00:09:12.655 "compare": false, 00:09:12.655 "compare_and_write": false, 00:09:12.655 "abort": true, 00:09:12.655 "seek_hole": false, 00:09:12.655 "seek_data": false, 00:09:12.655 "copy": true, 00:09:12.655 "nvme_iov_md": false 00:09:12.656 }, 00:09:12.656 "memory_domains": [ 00:09:12.656 { 00:09:12.656 "dma_device_id": "system", 00:09:12.656 "dma_device_type": 1 00:09:12.656 }, 00:09:12.656 { 00:09:12.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.656 "dma_device_type": 2 00:09:12.656 } 00:09:12.656 ], 00:09:12.656 "driver_specific": {} 00:09:12.656 } 00:09:12.656 ] 00:09:12.656 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.656 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.656 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.656 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.656 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.656 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.656 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.915 [2024-11-20 03:16:02.291227] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.915 [2024-11-20 03:16:02.291335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.915 [2024-11-20 03:16:02.291396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.915 [2024-11-20 03:16:02.293313] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:12.915 "name": "Existed_Raid", 00:09:12.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.915 "strip_size_kb": 0, 00:09:12.915 "state": "configuring", 00:09:12.915 "raid_level": "raid1", 00:09:12.915 "superblock": false, 00:09:12.915 "num_base_bdevs": 3, 00:09:12.915 "num_base_bdevs_discovered": 2, 00:09:12.915 "num_base_bdevs_operational": 3, 00:09:12.915 "base_bdevs_list": [ 00:09:12.915 { 00:09:12.915 "name": "BaseBdev1", 00:09:12.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.915 "is_configured": false, 00:09:12.915 "data_offset": 0, 00:09:12.915 "data_size": 0 00:09:12.915 }, 00:09:12.915 { 00:09:12.915 "name": "BaseBdev2", 00:09:12.915 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:12.915 "is_configured": true, 00:09:12.915 "data_offset": 0, 00:09:12.915 "data_size": 65536 00:09:12.915 }, 00:09:12.915 { 00:09:12.915 "name": "BaseBdev3", 00:09:12.915 "uuid": "24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:12.915 "is_configured": true, 00:09:12.915 "data_offset": 0, 00:09:12.915 "data_size": 65536 00:09:12.915 } 00:09:12.915 ] 00:09:12.915 }' 00:09:12.915 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.916 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.175 [2024-11-20 03:16:02.746566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.175 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.176 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.176 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.176 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.176 "name": "Existed_Raid", 00:09:13.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.176 "strip_size_kb": 0, 00:09:13.176 "state": "configuring", 00:09:13.176 "raid_level": "raid1", 00:09:13.176 "superblock": false, 00:09:13.176 "num_base_bdevs": 3, 
00:09:13.176 "num_base_bdevs_discovered": 1, 00:09:13.176 "num_base_bdevs_operational": 3, 00:09:13.176 "base_bdevs_list": [ 00:09:13.176 { 00:09:13.176 "name": "BaseBdev1", 00:09:13.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.176 "is_configured": false, 00:09:13.176 "data_offset": 0, 00:09:13.176 "data_size": 0 00:09:13.176 }, 00:09:13.176 { 00:09:13.176 "name": null, 00:09:13.176 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:13.176 "is_configured": false, 00:09:13.176 "data_offset": 0, 00:09:13.176 "data_size": 65536 00:09:13.176 }, 00:09:13.176 { 00:09:13.176 "name": "BaseBdev3", 00:09:13.176 "uuid": "24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:13.176 "is_configured": true, 00:09:13.176 "data_offset": 0, 00:09:13.176 "data_size": 65536 00:09:13.176 } 00:09:13.176 ] 00:09:13.176 }' 00:09:13.176 03:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.176 03:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.744 03:16:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.744 [2024-11-20 03:16:03.338501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.744 BaseBdev1 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.744 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.744 [ 00:09:13.744 { 00:09:13.744 "name": "BaseBdev1", 00:09:13.744 "aliases": [ 00:09:13.744 "990038ad-8754-45c0-a2b3-feec157cc811" 00:09:13.744 ], 00:09:13.744 "product_name": "Malloc disk", 
00:09:13.744 "block_size": 512, 00:09:13.745 "num_blocks": 65536, 00:09:13.745 "uuid": "990038ad-8754-45c0-a2b3-feec157cc811", 00:09:13.745 "assigned_rate_limits": { 00:09:13.745 "rw_ios_per_sec": 0, 00:09:13.745 "rw_mbytes_per_sec": 0, 00:09:13.745 "r_mbytes_per_sec": 0, 00:09:13.745 "w_mbytes_per_sec": 0 00:09:13.745 }, 00:09:13.745 "claimed": true, 00:09:13.745 "claim_type": "exclusive_write", 00:09:13.745 "zoned": false, 00:09:13.745 "supported_io_types": { 00:09:13.745 "read": true, 00:09:13.745 "write": true, 00:09:13.745 "unmap": true, 00:09:13.745 "flush": true, 00:09:13.745 "reset": true, 00:09:13.745 "nvme_admin": false, 00:09:13.745 "nvme_io": false, 00:09:13.745 "nvme_io_md": false, 00:09:13.745 "write_zeroes": true, 00:09:13.745 "zcopy": true, 00:09:13.745 "get_zone_info": false, 00:09:13.745 "zone_management": false, 00:09:13.745 "zone_append": false, 00:09:13.745 "compare": false, 00:09:13.745 "compare_and_write": false, 00:09:13.745 "abort": true, 00:09:13.745 "seek_hole": false, 00:09:13.745 "seek_data": false, 00:09:13.745 "copy": true, 00:09:13.745 "nvme_iov_md": false 00:09:13.745 }, 00:09:13.745 "memory_domains": [ 00:09:13.745 { 00:09:13.745 "dma_device_id": "system", 00:09:14.004 "dma_device_type": 1 00:09:14.004 }, 00:09:14.004 { 00:09:14.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.004 "dma_device_type": 2 00:09:14.004 } 00:09:14.004 ], 00:09:14.004 "driver_specific": {} 00:09:14.004 } 00:09:14.004 ] 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.004 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.004 "name": "Existed_Raid", 00:09:14.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.004 "strip_size_kb": 0, 00:09:14.004 "state": "configuring", 00:09:14.004 "raid_level": "raid1", 00:09:14.004 "superblock": false, 00:09:14.004 "num_base_bdevs": 3, 00:09:14.004 "num_base_bdevs_discovered": 2, 00:09:14.004 "num_base_bdevs_operational": 3, 00:09:14.004 "base_bdevs_list": [ 00:09:14.004 { 00:09:14.004 "name": "BaseBdev1", 00:09:14.004 "uuid": 
"990038ad-8754-45c0-a2b3-feec157cc811", 00:09:14.004 "is_configured": true, 00:09:14.004 "data_offset": 0, 00:09:14.004 "data_size": 65536 00:09:14.004 }, 00:09:14.004 { 00:09:14.004 "name": null, 00:09:14.004 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:14.004 "is_configured": false, 00:09:14.004 "data_offset": 0, 00:09:14.004 "data_size": 65536 00:09:14.004 }, 00:09:14.004 { 00:09:14.004 "name": "BaseBdev3", 00:09:14.004 "uuid": "24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:14.004 "is_configured": true, 00:09:14.005 "data_offset": 0, 00:09:14.005 "data_size": 65536 00:09:14.005 } 00:09:14.005 ] 00:09:14.005 }' 00:09:14.005 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.005 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.264 [2024-11-20 03:16:03.869661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.264 03:16:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.264 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.524 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.524 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.524 "name": "Existed_Raid", 00:09:14.524 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:14.524 "strip_size_kb": 0, 00:09:14.524 "state": "configuring", 00:09:14.524 "raid_level": "raid1", 00:09:14.524 "superblock": false, 00:09:14.524 "num_base_bdevs": 3, 00:09:14.524 "num_base_bdevs_discovered": 1, 00:09:14.524 "num_base_bdevs_operational": 3, 00:09:14.524 "base_bdevs_list": [ 00:09:14.524 { 00:09:14.524 "name": "BaseBdev1", 00:09:14.524 "uuid": "990038ad-8754-45c0-a2b3-feec157cc811", 00:09:14.524 "is_configured": true, 00:09:14.524 "data_offset": 0, 00:09:14.524 "data_size": 65536 00:09:14.524 }, 00:09:14.524 { 00:09:14.524 "name": null, 00:09:14.524 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:14.524 "is_configured": false, 00:09:14.524 "data_offset": 0, 00:09:14.524 "data_size": 65536 00:09:14.524 }, 00:09:14.524 { 00:09:14.524 "name": null, 00:09:14.524 "uuid": "24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:14.524 "is_configured": false, 00:09:14.524 "data_offset": 0, 00:09:14.524 "data_size": 65536 00:09:14.524 } 00:09:14.524 ] 00:09:14.524 }' 00:09:14.524 03:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.524 03:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.783 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.783 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.784 [2024-11-20 03:16:04.340874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.784 "name": "Existed_Raid", 00:09:14.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.784 "strip_size_kb": 0, 00:09:14.784 "state": "configuring", 00:09:14.784 "raid_level": "raid1", 00:09:14.784 "superblock": false, 00:09:14.784 "num_base_bdevs": 3, 00:09:14.784 "num_base_bdevs_discovered": 2, 00:09:14.784 "num_base_bdevs_operational": 3, 00:09:14.784 "base_bdevs_list": [ 00:09:14.784 { 00:09:14.784 "name": "BaseBdev1", 00:09:14.784 "uuid": "990038ad-8754-45c0-a2b3-feec157cc811", 00:09:14.784 "is_configured": true, 00:09:14.784 "data_offset": 0, 00:09:14.784 "data_size": 65536 00:09:14.784 }, 00:09:14.784 { 00:09:14.784 "name": null, 00:09:14.784 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:14.784 "is_configured": false, 00:09:14.784 "data_offset": 0, 00:09:14.784 "data_size": 65536 00:09:14.784 }, 00:09:14.784 { 00:09:14.784 "name": "BaseBdev3", 00:09:14.784 "uuid": "24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:14.784 "is_configured": true, 00:09:14.784 "data_offset": 0, 00:09:14.784 "data_size": 65536 00:09:14.784 } 00:09:14.784 ] 00:09:14.784 }' 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.784 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.353 [2024-11-20 03:16:04.812082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.353 03:16:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.353 "name": "Existed_Raid", 00:09:15.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.353 "strip_size_kb": 0, 00:09:15.353 "state": "configuring", 00:09:15.353 "raid_level": "raid1", 00:09:15.353 "superblock": false, 00:09:15.353 "num_base_bdevs": 3, 00:09:15.353 "num_base_bdevs_discovered": 1, 00:09:15.353 "num_base_bdevs_operational": 3, 00:09:15.353 "base_bdevs_list": [ 00:09:15.353 { 00:09:15.353 "name": null, 00:09:15.353 "uuid": "990038ad-8754-45c0-a2b3-feec157cc811", 00:09:15.353 "is_configured": false, 00:09:15.353 "data_offset": 0, 00:09:15.353 "data_size": 65536 00:09:15.353 }, 00:09:15.353 { 00:09:15.353 "name": null, 00:09:15.353 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:15.353 "is_configured": false, 00:09:15.353 "data_offset": 0, 00:09:15.353 "data_size": 65536 00:09:15.353 }, 00:09:15.353 { 00:09:15.353 "name": "BaseBdev3", 00:09:15.353 "uuid": "24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:15.353 "is_configured": true, 00:09:15.353 "data_offset": 0, 00:09:15.353 "data_size": 65536 00:09:15.353 } 00:09:15.353 ] 00:09:15.353 }' 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.353 03:16:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.922 [2024-11-20 03:16:05.392115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.922 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.922 "name": "Existed_Raid", 00:09:15.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.922 "strip_size_kb": 0, 00:09:15.922 "state": "configuring", 00:09:15.922 "raid_level": "raid1", 00:09:15.922 "superblock": false, 00:09:15.922 "num_base_bdevs": 3, 00:09:15.922 "num_base_bdevs_discovered": 2, 00:09:15.922 "num_base_bdevs_operational": 3, 00:09:15.922 "base_bdevs_list": [ 00:09:15.922 { 00:09:15.922 "name": null, 00:09:15.922 "uuid": "990038ad-8754-45c0-a2b3-feec157cc811", 00:09:15.922 "is_configured": false, 00:09:15.922 "data_offset": 0, 00:09:15.923 "data_size": 65536 00:09:15.923 }, 00:09:15.923 { 00:09:15.923 "name": "BaseBdev2", 00:09:15.923 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:15.923 "is_configured": true, 00:09:15.923 "data_offset": 0, 00:09:15.923 "data_size": 65536 00:09:15.923 }, 00:09:15.923 { 
00:09:15.923 "name": "BaseBdev3", 00:09:15.923 "uuid": "24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:15.923 "is_configured": true, 00:09:15.923 "data_offset": 0, 00:09:15.923 "data_size": 65536 00:09:15.923 } 00:09:15.923 ] 00:09:15.923 }' 00:09:15.923 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.923 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 990038ad-8754-45c0-a2b3-feec157cc811 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.493 03:16:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 [2024-11-20 03:16:05.960381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:16.493 [2024-11-20 03:16:05.960533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:16.493 [2024-11-20 03:16:05.960559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:16.493 [2024-11-20 03:16:05.960873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:16.493 [2024-11-20 03:16:05.961094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:16.493 [2024-11-20 03:16:05.961141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:16.493 [2024-11-20 03:16:05.961448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.493 NewBaseBdev 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.493 03:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 [ 00:09:16.493 { 00:09:16.493 "name": "NewBaseBdev", 00:09:16.493 "aliases": [ 00:09:16.493 "990038ad-8754-45c0-a2b3-feec157cc811" 00:09:16.493 ], 00:09:16.493 "product_name": "Malloc disk", 00:09:16.493 "block_size": 512, 00:09:16.493 "num_blocks": 65536, 00:09:16.493 "uuid": "990038ad-8754-45c0-a2b3-feec157cc811", 00:09:16.493 "assigned_rate_limits": { 00:09:16.493 "rw_ios_per_sec": 0, 00:09:16.493 "rw_mbytes_per_sec": 0, 00:09:16.493 "r_mbytes_per_sec": 0, 00:09:16.493 "w_mbytes_per_sec": 0 00:09:16.493 }, 00:09:16.493 "claimed": true, 00:09:16.493 "claim_type": "exclusive_write", 00:09:16.493 "zoned": false, 00:09:16.493 "supported_io_types": { 00:09:16.493 "read": true, 00:09:16.493 "write": true, 00:09:16.493 "unmap": true, 00:09:16.493 "flush": true, 00:09:16.493 "reset": true, 00:09:16.493 "nvme_admin": false, 00:09:16.493 "nvme_io": false, 00:09:16.493 "nvme_io_md": false, 00:09:16.493 "write_zeroes": true, 00:09:16.493 "zcopy": true, 00:09:16.493 "get_zone_info": false, 00:09:16.493 "zone_management": false, 00:09:16.493 "zone_append": false, 00:09:16.493 "compare": false, 00:09:16.493 "compare_and_write": false, 00:09:16.493 "abort": true, 00:09:16.493 "seek_hole": false, 00:09:16.493 "seek_data": false, 00:09:16.493 "copy": true, 00:09:16.493 "nvme_iov_md": false 00:09:16.493 }, 00:09:16.493 "memory_domains": [ 00:09:16.493 { 00:09:16.493 
"dma_device_id": "system", 00:09:16.493 "dma_device_type": 1 00:09:16.493 }, 00:09:16.493 { 00:09:16.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.493 "dma_device_type": 2 00:09:16.493 } 00:09:16.493 ], 00:09:16.493 "driver_specific": {} 00:09:16.493 } 00:09:16.493 ] 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.493 "name": "Existed_Raid", 00:09:16.493 "uuid": "2b3df274-f064-42f7-a286-8d67c882d46c", 00:09:16.493 "strip_size_kb": 0, 00:09:16.493 "state": "online", 00:09:16.493 "raid_level": "raid1", 00:09:16.493 "superblock": false, 00:09:16.493 "num_base_bdevs": 3, 00:09:16.493 "num_base_bdevs_discovered": 3, 00:09:16.493 "num_base_bdevs_operational": 3, 00:09:16.493 "base_bdevs_list": [ 00:09:16.493 { 00:09:16.493 "name": "NewBaseBdev", 00:09:16.493 "uuid": "990038ad-8754-45c0-a2b3-feec157cc811", 00:09:16.493 "is_configured": true, 00:09:16.493 "data_offset": 0, 00:09:16.493 "data_size": 65536 00:09:16.493 }, 00:09:16.493 { 00:09:16.493 "name": "BaseBdev2", 00:09:16.493 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:16.493 "is_configured": true, 00:09:16.493 "data_offset": 0, 00:09:16.493 "data_size": 65536 00:09:16.493 }, 00:09:16.493 { 00:09:16.493 "name": "BaseBdev3", 00:09:16.493 "uuid": "24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:16.493 "is_configured": true, 00:09:16.493 "data_offset": 0, 00:09:16.493 "data_size": 65536 00:09:16.493 } 00:09:16.493 ] 00:09:16.493 }' 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.493 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.064 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.065 03:16:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.065 [2024-11-20 03:16:06.471932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.065 "name": "Existed_Raid", 00:09:17.065 "aliases": [ 00:09:17.065 "2b3df274-f064-42f7-a286-8d67c882d46c" 00:09:17.065 ], 00:09:17.065 "product_name": "Raid Volume", 00:09:17.065 "block_size": 512, 00:09:17.065 "num_blocks": 65536, 00:09:17.065 "uuid": "2b3df274-f064-42f7-a286-8d67c882d46c", 00:09:17.065 "assigned_rate_limits": { 00:09:17.065 "rw_ios_per_sec": 0, 00:09:17.065 "rw_mbytes_per_sec": 0, 00:09:17.065 "r_mbytes_per_sec": 0, 00:09:17.065 "w_mbytes_per_sec": 0 00:09:17.065 }, 00:09:17.065 "claimed": false, 00:09:17.065 "zoned": false, 00:09:17.065 "supported_io_types": { 00:09:17.065 "read": true, 00:09:17.065 "write": true, 00:09:17.065 "unmap": false, 00:09:17.065 "flush": false, 00:09:17.065 "reset": true, 00:09:17.065 "nvme_admin": false, 00:09:17.065 "nvme_io": false, 00:09:17.065 "nvme_io_md": false, 00:09:17.065 "write_zeroes": true, 00:09:17.065 "zcopy": false, 00:09:17.065 
"get_zone_info": false, 00:09:17.065 "zone_management": false, 00:09:17.065 "zone_append": false, 00:09:17.065 "compare": false, 00:09:17.065 "compare_and_write": false, 00:09:17.065 "abort": false, 00:09:17.065 "seek_hole": false, 00:09:17.065 "seek_data": false, 00:09:17.065 "copy": false, 00:09:17.065 "nvme_iov_md": false 00:09:17.065 }, 00:09:17.065 "memory_domains": [ 00:09:17.065 { 00:09:17.065 "dma_device_id": "system", 00:09:17.065 "dma_device_type": 1 00:09:17.065 }, 00:09:17.065 { 00:09:17.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.065 "dma_device_type": 2 00:09:17.065 }, 00:09:17.065 { 00:09:17.065 "dma_device_id": "system", 00:09:17.065 "dma_device_type": 1 00:09:17.065 }, 00:09:17.065 { 00:09:17.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.065 "dma_device_type": 2 00:09:17.065 }, 00:09:17.065 { 00:09:17.065 "dma_device_id": "system", 00:09:17.065 "dma_device_type": 1 00:09:17.065 }, 00:09:17.065 { 00:09:17.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.065 "dma_device_type": 2 00:09:17.065 } 00:09:17.065 ], 00:09:17.065 "driver_specific": { 00:09:17.065 "raid": { 00:09:17.065 "uuid": "2b3df274-f064-42f7-a286-8d67c882d46c", 00:09:17.065 "strip_size_kb": 0, 00:09:17.065 "state": "online", 00:09:17.065 "raid_level": "raid1", 00:09:17.065 "superblock": false, 00:09:17.065 "num_base_bdevs": 3, 00:09:17.065 "num_base_bdevs_discovered": 3, 00:09:17.065 "num_base_bdevs_operational": 3, 00:09:17.065 "base_bdevs_list": [ 00:09:17.065 { 00:09:17.065 "name": "NewBaseBdev", 00:09:17.065 "uuid": "990038ad-8754-45c0-a2b3-feec157cc811", 00:09:17.065 "is_configured": true, 00:09:17.065 "data_offset": 0, 00:09:17.065 "data_size": 65536 00:09:17.065 }, 00:09:17.065 { 00:09:17.065 "name": "BaseBdev2", 00:09:17.065 "uuid": "cedcd02d-eff1-401a-8c6c-6361ef2ce4a4", 00:09:17.065 "is_configured": true, 00:09:17.065 "data_offset": 0, 00:09:17.065 "data_size": 65536 00:09:17.065 }, 00:09:17.065 { 00:09:17.065 "name": "BaseBdev3", 00:09:17.065 "uuid": 
"24825a9c-4809-435b-9295-6db1a95dcaad", 00:09:17.065 "is_configured": true, 00:09:17.065 "data_offset": 0, 00:09:17.065 "data_size": 65536 00:09:17.065 } 00:09:17.065 ] 00:09:17.065 } 00:09:17.065 } 00:09:17.065 }' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:17.065 BaseBdev2 00:09:17.065 BaseBdev3' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.065 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:17.325 [2024-11-20 03:16:06.735147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.325 [2024-11-20 03:16:06.735233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.325 [2024-11-20 03:16:06.735346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.325 [2024-11-20 03:16:06.735667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.325 [2024-11-20 03:16:06.735724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67254 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67254 ']' 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67254 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67254 00:09:17.325 killing process with pid 67254 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67254' 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67254 00:09:17.325 
[2024-11-20 03:16:06.783502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.325 03:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67254 00:09:17.583 [2024-11-20 03:16:07.089311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.961 00:09:18.961 real 0m10.605s 00:09:18.961 user 0m16.854s 00:09:18.961 sys 0m1.887s 00:09:18.961 ************************************ 00:09:18.961 END TEST raid_state_function_test 00:09:18.961 ************************************ 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.961 03:16:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:18.961 03:16:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:18.961 03:16:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.961 03:16:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.961 ************************************ 00:09:18.961 START TEST raid_state_function_test_sb 00:09:18.961 ************************************ 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:18.961 03:16:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:18.961 
03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67877 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:18.961 Process raid pid: 67877 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67877' 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67877 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67877 ']' 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.961 03:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.961 [2024-11-20 03:16:08.365542] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:09:18.961 [2024-11-20 03:16:08.365686] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.961 [2024-11-20 03:16:08.541731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.227 [2024-11-20 03:16:08.658274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.493 [2024-11-20 03:16:08.862660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.493 [2024-11-20 03:16:08.862708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 [2024-11-20 03:16:09.204081] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.750 [2024-11-20 03:16:09.204142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.750 [2024-11-20 03:16:09.204152] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.750 [2024-11-20 03:16:09.204162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.750 [2024-11-20 03:16:09.204169] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:19.750 [2024-11-20 03:16:09.204177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.750 "name": "Existed_Raid", 00:09:19.750 "uuid": "94aeaa1e-ee10-45f0-98f3-d0027c9e4fb6", 00:09:19.750 "strip_size_kb": 0, 00:09:19.750 "state": "configuring", 00:09:19.750 "raid_level": "raid1", 00:09:19.750 "superblock": true, 00:09:19.750 "num_base_bdevs": 3, 00:09:19.750 "num_base_bdevs_discovered": 0, 00:09:19.750 "num_base_bdevs_operational": 3, 00:09:19.750 "base_bdevs_list": [ 00:09:19.750 { 00:09:19.750 "name": "BaseBdev1", 00:09:19.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.750 "is_configured": false, 00:09:19.750 "data_offset": 0, 00:09:19.750 "data_size": 0 00:09:19.750 }, 00:09:19.750 { 00:09:19.750 "name": "BaseBdev2", 00:09:19.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.750 "is_configured": false, 00:09:19.750 "data_offset": 0, 00:09:19.750 "data_size": 0 00:09:19.750 }, 00:09:19.750 { 00:09:19.750 "name": "BaseBdev3", 00:09:19.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.750 "is_configured": false, 00:09:19.750 "data_offset": 0, 00:09:19.750 "data_size": 0 00:09:19.750 } 00:09:19.750 ] 00:09:19.750 }' 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.750 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.315 [2024-11-20 03:16:09.703147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.315 [2024-11-20 03:16:09.703252] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.315 [2024-11-20 03:16:09.715110] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.315 [2024-11-20 03:16:09.715155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.315 [2024-11-20 03:16:09.715164] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.315 [2024-11-20 03:16:09.715173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.315 [2024-11-20 03:16:09.715179] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.315 [2024-11-20 03:16:09.715187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:20.315 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.316 [2024-11-20 03:16:09.765724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.316 BaseBdev1 
00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.316 [ 00:09:20.316 { 00:09:20.316 "name": "BaseBdev1", 00:09:20.316 "aliases": [ 00:09:20.316 "7278abf5-a39b-4f94-8da0-b2babcc64bf7" 00:09:20.316 ], 00:09:20.316 "product_name": "Malloc disk", 00:09:20.316 "block_size": 512, 00:09:20.316 "num_blocks": 65536, 00:09:20.316 "uuid": "7278abf5-a39b-4f94-8da0-b2babcc64bf7", 00:09:20.316 "assigned_rate_limits": { 00:09:20.316 
"rw_ios_per_sec": 0, 00:09:20.316 "rw_mbytes_per_sec": 0, 00:09:20.316 "r_mbytes_per_sec": 0, 00:09:20.316 "w_mbytes_per_sec": 0 00:09:20.316 }, 00:09:20.316 "claimed": true, 00:09:20.316 "claim_type": "exclusive_write", 00:09:20.316 "zoned": false, 00:09:20.316 "supported_io_types": { 00:09:20.316 "read": true, 00:09:20.316 "write": true, 00:09:20.316 "unmap": true, 00:09:20.316 "flush": true, 00:09:20.316 "reset": true, 00:09:20.316 "nvme_admin": false, 00:09:20.316 "nvme_io": false, 00:09:20.316 "nvme_io_md": false, 00:09:20.316 "write_zeroes": true, 00:09:20.316 "zcopy": true, 00:09:20.316 "get_zone_info": false, 00:09:20.316 "zone_management": false, 00:09:20.316 "zone_append": false, 00:09:20.316 "compare": false, 00:09:20.316 "compare_and_write": false, 00:09:20.316 "abort": true, 00:09:20.316 "seek_hole": false, 00:09:20.316 "seek_data": false, 00:09:20.316 "copy": true, 00:09:20.316 "nvme_iov_md": false 00:09:20.316 }, 00:09:20.316 "memory_domains": [ 00:09:20.316 { 00:09:20.316 "dma_device_id": "system", 00:09:20.316 "dma_device_type": 1 00:09:20.316 }, 00:09:20.316 { 00:09:20.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.316 "dma_device_type": 2 00:09:20.316 } 00:09:20.316 ], 00:09:20.316 "driver_specific": {} 00:09:20.316 } 00:09:20.316 ] 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.316 "name": "Existed_Raid", 00:09:20.316 "uuid": "ae1dd1ec-1d04-4a22-b0d0-e9428b3fee03", 00:09:20.316 "strip_size_kb": 0, 00:09:20.316 "state": "configuring", 00:09:20.316 "raid_level": "raid1", 00:09:20.316 "superblock": true, 00:09:20.316 "num_base_bdevs": 3, 00:09:20.316 "num_base_bdevs_discovered": 1, 00:09:20.316 "num_base_bdevs_operational": 3, 00:09:20.316 "base_bdevs_list": [ 00:09:20.316 { 00:09:20.316 "name": "BaseBdev1", 00:09:20.316 "uuid": "7278abf5-a39b-4f94-8da0-b2babcc64bf7", 00:09:20.316 "is_configured": true, 00:09:20.316 "data_offset": 2048, 00:09:20.316 "data_size": 63488 
00:09:20.316 }, 00:09:20.316 { 00:09:20.316 "name": "BaseBdev2", 00:09:20.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.316 "is_configured": false, 00:09:20.316 "data_offset": 0, 00:09:20.316 "data_size": 0 00:09:20.316 }, 00:09:20.316 { 00:09:20.316 "name": "BaseBdev3", 00:09:20.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.316 "is_configured": false, 00:09:20.316 "data_offset": 0, 00:09:20.316 "data_size": 0 00:09:20.316 } 00:09:20.316 ] 00:09:20.316 }' 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.316 03:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.575 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.575 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.575 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.834 [2024-11-20 03:16:10.213046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.834 [2024-11-20 03:16:10.213176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:20.834 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.835 [2024-11-20 03:16:10.225082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.835 [2024-11-20 03:16:10.227068] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.835 [2024-11-20 03:16:10.227119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.835 [2024-11-20 03:16:10.227130] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.835 [2024-11-20 03:16:10.227141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.835 "name": "Existed_Raid", 00:09:20.835 "uuid": "797eb959-1eca-4a76-a333-a8d6f634f35c", 00:09:20.835 "strip_size_kb": 0, 00:09:20.835 "state": "configuring", 00:09:20.835 "raid_level": "raid1", 00:09:20.835 "superblock": true, 00:09:20.835 "num_base_bdevs": 3, 00:09:20.835 "num_base_bdevs_discovered": 1, 00:09:20.835 "num_base_bdevs_operational": 3, 00:09:20.835 "base_bdevs_list": [ 00:09:20.835 { 00:09:20.835 "name": "BaseBdev1", 00:09:20.835 "uuid": "7278abf5-a39b-4f94-8da0-b2babcc64bf7", 00:09:20.835 "is_configured": true, 00:09:20.835 "data_offset": 2048, 00:09:20.835 "data_size": 63488 00:09:20.835 }, 00:09:20.835 { 00:09:20.835 "name": "BaseBdev2", 00:09:20.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.835 "is_configured": false, 00:09:20.835 "data_offset": 0, 00:09:20.835 "data_size": 0 00:09:20.835 }, 00:09:20.835 { 00:09:20.835 "name": "BaseBdev3", 00:09:20.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.835 "is_configured": false, 00:09:20.835 "data_offset": 0, 00:09:20.835 "data_size": 0 00:09:20.835 } 00:09:20.835 ] 00:09:20.835 }' 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.835 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.095 [2024-11-20 03:16:10.637810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.095 BaseBdev2 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.095 [ 00:09:21.095 { 00:09:21.095 "name": "BaseBdev2", 00:09:21.095 "aliases": [ 00:09:21.095 "e164c629-3b54-4e45-be93-91ba24445d43" 00:09:21.095 ], 00:09:21.095 "product_name": "Malloc disk", 00:09:21.095 "block_size": 512, 00:09:21.095 "num_blocks": 65536, 00:09:21.095 "uuid": "e164c629-3b54-4e45-be93-91ba24445d43", 00:09:21.095 "assigned_rate_limits": { 00:09:21.095 "rw_ios_per_sec": 0, 00:09:21.095 "rw_mbytes_per_sec": 0, 00:09:21.095 "r_mbytes_per_sec": 0, 00:09:21.095 "w_mbytes_per_sec": 0 00:09:21.095 }, 00:09:21.095 "claimed": true, 00:09:21.095 "claim_type": "exclusive_write", 00:09:21.095 "zoned": false, 00:09:21.095 "supported_io_types": { 00:09:21.095 "read": true, 00:09:21.095 "write": true, 00:09:21.095 "unmap": true, 00:09:21.095 "flush": true, 00:09:21.095 "reset": true, 00:09:21.095 "nvme_admin": false, 00:09:21.095 "nvme_io": false, 00:09:21.095 "nvme_io_md": false, 00:09:21.095 "write_zeroes": true, 00:09:21.095 "zcopy": true, 00:09:21.095 "get_zone_info": false, 00:09:21.095 "zone_management": false, 00:09:21.095 "zone_append": false, 00:09:21.095 "compare": false, 00:09:21.095 "compare_and_write": false, 00:09:21.095 "abort": true, 00:09:21.095 "seek_hole": false, 00:09:21.095 "seek_data": false, 00:09:21.095 "copy": true, 00:09:21.095 "nvme_iov_md": false 00:09:21.095 }, 00:09:21.095 "memory_domains": [ 00:09:21.095 { 00:09:21.095 "dma_device_id": "system", 00:09:21.095 "dma_device_type": 1 00:09:21.095 }, 00:09:21.095 { 00:09:21.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.095 "dma_device_type": 2 00:09:21.095 } 00:09:21.095 ], 00:09:21.095 "driver_specific": {} 00:09:21.095 } 00:09:21.095 ] 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.095 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.355 
03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.355 "name": "Existed_Raid", 00:09:21.355 "uuid": "797eb959-1eca-4a76-a333-a8d6f634f35c", 00:09:21.355 "strip_size_kb": 0, 00:09:21.355 "state": "configuring", 00:09:21.355 "raid_level": "raid1", 00:09:21.355 "superblock": true, 00:09:21.355 "num_base_bdevs": 3, 00:09:21.355 "num_base_bdevs_discovered": 2, 00:09:21.355 "num_base_bdevs_operational": 3, 00:09:21.355 "base_bdevs_list": [ 00:09:21.355 { 00:09:21.355 "name": "BaseBdev1", 00:09:21.355 "uuid": "7278abf5-a39b-4f94-8da0-b2babcc64bf7", 00:09:21.355 "is_configured": true, 00:09:21.355 "data_offset": 2048, 00:09:21.355 "data_size": 63488 00:09:21.355 }, 00:09:21.355 { 00:09:21.355 "name": "BaseBdev2", 00:09:21.355 "uuid": "e164c629-3b54-4e45-be93-91ba24445d43", 00:09:21.355 "is_configured": true, 00:09:21.355 "data_offset": 2048, 00:09:21.355 "data_size": 63488 00:09:21.355 }, 00:09:21.355 { 00:09:21.355 "name": "BaseBdev3", 00:09:21.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.355 "is_configured": false, 00:09:21.355 "data_offset": 0, 00:09:21.355 "data_size": 0 00:09:21.355 } 00:09:21.355 ] 00:09:21.355 }' 00:09:21.355 03:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.355 03:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.615 [2024-11-20 03:16:11.179690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.615 [2024-11-20 03:16:11.179954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:21.615 [2024-11-20 03:16:11.179976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:21.615 [2024-11-20 03:16:11.180238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:21.615 [2024-11-20 03:16:11.180389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.615 [2024-11-20 03:16:11.180397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:21.615 [2024-11-20 03:16:11.180541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.615 BaseBdev3 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.615 03:16:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.615 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.615 [ 00:09:21.615 { 00:09:21.615 "name": "BaseBdev3", 00:09:21.615 "aliases": [ 00:09:21.615 "d9690210-d1d8-41f5-831d-5c5880baf30e" 00:09:21.615 ], 00:09:21.616 "product_name": "Malloc disk", 00:09:21.616 "block_size": 512, 00:09:21.616 "num_blocks": 65536, 00:09:21.616 "uuid": "d9690210-d1d8-41f5-831d-5c5880baf30e", 00:09:21.616 "assigned_rate_limits": { 00:09:21.616 "rw_ios_per_sec": 0, 00:09:21.616 "rw_mbytes_per_sec": 0, 00:09:21.616 "r_mbytes_per_sec": 0, 00:09:21.616 "w_mbytes_per_sec": 0 00:09:21.616 }, 00:09:21.616 "claimed": true, 00:09:21.616 "claim_type": "exclusive_write", 00:09:21.616 "zoned": false, 00:09:21.616 "supported_io_types": { 00:09:21.616 "read": true, 00:09:21.616 "write": true, 00:09:21.616 "unmap": true, 00:09:21.616 "flush": true, 00:09:21.616 "reset": true, 00:09:21.616 "nvme_admin": false, 00:09:21.616 "nvme_io": false, 00:09:21.616 "nvme_io_md": false, 00:09:21.616 "write_zeroes": true, 00:09:21.616 "zcopy": true, 00:09:21.616 "get_zone_info": false, 00:09:21.616 "zone_management": false, 00:09:21.616 "zone_append": false, 00:09:21.616 "compare": false, 00:09:21.616 "compare_and_write": false, 00:09:21.616 "abort": true, 00:09:21.616 "seek_hole": false, 00:09:21.616 "seek_data": false, 00:09:21.616 "copy": true, 00:09:21.616 "nvme_iov_md": false 00:09:21.616 }, 00:09:21.616 "memory_domains": [ 00:09:21.616 { 00:09:21.616 "dma_device_id": "system", 00:09:21.616 "dma_device_type": 1 00:09:21.616 }, 00:09:21.616 { 00:09:21.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.616 "dma_device_type": 2 00:09:21.616 } 00:09:21.616 ], 00:09:21.616 "driver_specific": {} 00:09:21.616 } 00:09:21.616 ] 
00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.616 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.616 03:16:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.875 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.875 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.875 "name": "Existed_Raid", 00:09:21.875 "uuid": "797eb959-1eca-4a76-a333-a8d6f634f35c", 00:09:21.875 "strip_size_kb": 0, 00:09:21.875 "state": "online", 00:09:21.875 "raid_level": "raid1", 00:09:21.875 "superblock": true, 00:09:21.875 "num_base_bdevs": 3, 00:09:21.875 "num_base_bdevs_discovered": 3, 00:09:21.875 "num_base_bdevs_operational": 3, 00:09:21.875 "base_bdevs_list": [ 00:09:21.875 { 00:09:21.875 "name": "BaseBdev1", 00:09:21.875 "uuid": "7278abf5-a39b-4f94-8da0-b2babcc64bf7", 00:09:21.875 "is_configured": true, 00:09:21.875 "data_offset": 2048, 00:09:21.875 "data_size": 63488 00:09:21.875 }, 00:09:21.875 { 00:09:21.875 "name": "BaseBdev2", 00:09:21.875 "uuid": "e164c629-3b54-4e45-be93-91ba24445d43", 00:09:21.875 "is_configured": true, 00:09:21.875 "data_offset": 2048, 00:09:21.875 "data_size": 63488 00:09:21.875 }, 00:09:21.875 { 00:09:21.875 "name": "BaseBdev3", 00:09:21.875 "uuid": "d9690210-d1d8-41f5-831d-5c5880baf30e", 00:09:21.875 "is_configured": true, 00:09:21.875 "data_offset": 2048, 00:09:21.875 "data_size": 63488 00:09:21.875 } 00:09:21.875 ] 00:09:21.875 }' 00:09:21.875 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.875 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.135 [2024-11-20 03:16:11.611359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.135 "name": "Existed_Raid", 00:09:22.135 "aliases": [ 00:09:22.135 "797eb959-1eca-4a76-a333-a8d6f634f35c" 00:09:22.135 ], 00:09:22.135 "product_name": "Raid Volume", 00:09:22.135 "block_size": 512, 00:09:22.135 "num_blocks": 63488, 00:09:22.135 "uuid": "797eb959-1eca-4a76-a333-a8d6f634f35c", 00:09:22.135 "assigned_rate_limits": { 00:09:22.135 "rw_ios_per_sec": 0, 00:09:22.135 "rw_mbytes_per_sec": 0, 00:09:22.135 "r_mbytes_per_sec": 0, 00:09:22.135 "w_mbytes_per_sec": 0 00:09:22.135 }, 00:09:22.135 "claimed": false, 00:09:22.135 "zoned": false, 00:09:22.135 "supported_io_types": { 00:09:22.135 "read": true, 00:09:22.135 "write": true, 00:09:22.135 "unmap": false, 00:09:22.135 "flush": false, 00:09:22.135 "reset": true, 00:09:22.135 "nvme_admin": false, 00:09:22.135 "nvme_io": false, 00:09:22.135 "nvme_io_md": false, 00:09:22.135 
"write_zeroes": true, 00:09:22.135 "zcopy": false, 00:09:22.135 "get_zone_info": false, 00:09:22.135 "zone_management": false, 00:09:22.135 "zone_append": false, 00:09:22.135 "compare": false, 00:09:22.135 "compare_and_write": false, 00:09:22.135 "abort": false, 00:09:22.135 "seek_hole": false, 00:09:22.135 "seek_data": false, 00:09:22.135 "copy": false, 00:09:22.135 "nvme_iov_md": false 00:09:22.135 }, 00:09:22.135 "memory_domains": [ 00:09:22.135 { 00:09:22.135 "dma_device_id": "system", 00:09:22.135 "dma_device_type": 1 00:09:22.135 }, 00:09:22.135 { 00:09:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.135 "dma_device_type": 2 00:09:22.135 }, 00:09:22.135 { 00:09:22.135 "dma_device_id": "system", 00:09:22.135 "dma_device_type": 1 00:09:22.135 }, 00:09:22.135 { 00:09:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.135 "dma_device_type": 2 00:09:22.135 }, 00:09:22.135 { 00:09:22.135 "dma_device_id": "system", 00:09:22.135 "dma_device_type": 1 00:09:22.135 }, 00:09:22.135 { 00:09:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.135 "dma_device_type": 2 00:09:22.135 } 00:09:22.135 ], 00:09:22.135 "driver_specific": { 00:09:22.135 "raid": { 00:09:22.135 "uuid": "797eb959-1eca-4a76-a333-a8d6f634f35c", 00:09:22.135 "strip_size_kb": 0, 00:09:22.135 "state": "online", 00:09:22.135 "raid_level": "raid1", 00:09:22.135 "superblock": true, 00:09:22.135 "num_base_bdevs": 3, 00:09:22.135 "num_base_bdevs_discovered": 3, 00:09:22.135 "num_base_bdevs_operational": 3, 00:09:22.135 "base_bdevs_list": [ 00:09:22.135 { 00:09:22.135 "name": "BaseBdev1", 00:09:22.135 "uuid": "7278abf5-a39b-4f94-8da0-b2babcc64bf7", 00:09:22.135 "is_configured": true, 00:09:22.135 "data_offset": 2048, 00:09:22.135 "data_size": 63488 00:09:22.135 }, 00:09:22.135 { 00:09:22.135 "name": "BaseBdev2", 00:09:22.135 "uuid": "e164c629-3b54-4e45-be93-91ba24445d43", 00:09:22.135 "is_configured": true, 00:09:22.135 "data_offset": 2048, 00:09:22.135 "data_size": 63488 00:09:22.135 }, 
00:09:22.135 { 00:09:22.135 "name": "BaseBdev3", 00:09:22.135 "uuid": "d9690210-d1d8-41f5-831d-5c5880baf30e", 00:09:22.135 "is_configured": true, 00:09:22.135 "data_offset": 2048, 00:09:22.135 "data_size": 63488 00:09:22.135 } 00:09:22.135 ] 00:09:22.135 } 00:09:22.135 } 00:09:22.135 }' 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:22.135 BaseBdev2 00:09:22.135 BaseBdev3' 00:09:22.135 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.136 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.396 
03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.396 [2024-11-20 03:16:11.838713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.396 
03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.396 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.396 "name": "Existed_Raid", 00:09:22.396 "uuid": "797eb959-1eca-4a76-a333-a8d6f634f35c", 00:09:22.396 "strip_size_kb": 0, 00:09:22.396 "state": "online", 00:09:22.396 "raid_level": "raid1", 00:09:22.396 "superblock": true, 00:09:22.396 "num_base_bdevs": 3, 00:09:22.396 "num_base_bdevs_discovered": 2, 00:09:22.396 "num_base_bdevs_operational": 2, 00:09:22.396 "base_bdevs_list": [ 00:09:22.397 { 00:09:22.397 "name": null, 00:09:22.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.397 "is_configured": false, 00:09:22.397 "data_offset": 0, 00:09:22.397 "data_size": 63488 00:09:22.397 }, 00:09:22.397 { 00:09:22.397 "name": "BaseBdev2", 00:09:22.397 "uuid": "e164c629-3b54-4e45-be93-91ba24445d43", 00:09:22.397 "is_configured": true, 00:09:22.397 "data_offset": 2048, 00:09:22.397 "data_size": 63488 00:09:22.397 }, 00:09:22.397 { 00:09:22.397 "name": "BaseBdev3", 00:09:22.397 "uuid": "d9690210-d1d8-41f5-831d-5c5880baf30e", 00:09:22.397 "is_configured": true, 00:09:22.397 "data_offset": 2048, 00:09:22.397 "data_size": 63488 00:09:22.397 } 00:09:22.397 ] 00:09:22.397 }' 00:09:22.397 03:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.397 
03:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.967 [2024-11-20 03:16:12.412153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.967 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.967 [2024-11-20 03:16:12.569501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.967 [2024-11-20 03:16:12.569607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.227 [2024-11-20 03:16:12.664323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.227 [2024-11-20 03:16:12.664383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.227 [2024-11-20 03:16:12.664395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.228 BaseBdev2 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.228 [ 00:09:23.228 { 00:09:23.228 "name": "BaseBdev2", 00:09:23.228 "aliases": [ 00:09:23.228 "8fa562d8-2df1-4cae-9a91-e3e2cd417117" 00:09:23.228 ], 00:09:23.228 "product_name": "Malloc disk", 00:09:23.228 "block_size": 512, 00:09:23.228 "num_blocks": 65536, 00:09:23.228 "uuid": "8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:23.228 "assigned_rate_limits": { 00:09:23.228 "rw_ios_per_sec": 0, 00:09:23.228 "rw_mbytes_per_sec": 0, 00:09:23.228 "r_mbytes_per_sec": 0, 00:09:23.228 "w_mbytes_per_sec": 0 00:09:23.228 }, 00:09:23.228 "claimed": false, 00:09:23.228 "zoned": false, 00:09:23.228 "supported_io_types": { 00:09:23.228 "read": true, 00:09:23.228 "write": true, 00:09:23.228 "unmap": true, 00:09:23.228 "flush": true, 00:09:23.228 "reset": true, 00:09:23.228 "nvme_admin": false, 00:09:23.228 "nvme_io": false, 00:09:23.228 
"nvme_io_md": false, 00:09:23.228 "write_zeroes": true, 00:09:23.228 "zcopy": true, 00:09:23.228 "get_zone_info": false, 00:09:23.228 "zone_management": false, 00:09:23.228 "zone_append": false, 00:09:23.228 "compare": false, 00:09:23.228 "compare_and_write": false, 00:09:23.228 "abort": true, 00:09:23.228 "seek_hole": false, 00:09:23.228 "seek_data": false, 00:09:23.228 "copy": true, 00:09:23.228 "nvme_iov_md": false 00:09:23.228 }, 00:09:23.228 "memory_domains": [ 00:09:23.228 { 00:09:23.228 "dma_device_id": "system", 00:09:23.228 "dma_device_type": 1 00:09:23.228 }, 00:09:23.228 { 00:09:23.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.228 "dma_device_type": 2 00:09:23.228 } 00:09:23.228 ], 00:09:23.228 "driver_specific": {} 00:09:23.228 } 00:09:23.228 ] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.228 BaseBdev3 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.228 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.488 [ 00:09:23.488 { 00:09:23.488 "name": "BaseBdev3", 00:09:23.488 "aliases": [ 00:09:23.488 "7d08055d-38b0-450f-80af-20c6ac457ea6" 00:09:23.488 ], 00:09:23.488 "product_name": "Malloc disk", 00:09:23.488 "block_size": 512, 00:09:23.488 "num_blocks": 65536, 00:09:23.488 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:23.488 "assigned_rate_limits": { 00:09:23.488 "rw_ios_per_sec": 0, 00:09:23.488 "rw_mbytes_per_sec": 0, 00:09:23.488 "r_mbytes_per_sec": 0, 00:09:23.488 "w_mbytes_per_sec": 0 00:09:23.488 }, 00:09:23.488 "claimed": false, 00:09:23.488 "zoned": false, 00:09:23.488 "supported_io_types": { 00:09:23.488 "read": true, 00:09:23.488 "write": true, 00:09:23.488 "unmap": true, 00:09:23.488 "flush": true, 00:09:23.488 "reset": true, 00:09:23.488 "nvme_admin": false, 
00:09:23.488 "nvme_io": false, 00:09:23.488 "nvme_io_md": false, 00:09:23.488 "write_zeroes": true, 00:09:23.488 "zcopy": true, 00:09:23.488 "get_zone_info": false, 00:09:23.488 "zone_management": false, 00:09:23.488 "zone_append": false, 00:09:23.488 "compare": false, 00:09:23.488 "compare_and_write": false, 00:09:23.488 "abort": true, 00:09:23.488 "seek_hole": false, 00:09:23.488 "seek_data": false, 00:09:23.488 "copy": true, 00:09:23.488 "nvme_iov_md": false 00:09:23.488 }, 00:09:23.488 "memory_domains": [ 00:09:23.488 { 00:09:23.488 "dma_device_id": "system", 00:09:23.488 "dma_device_type": 1 00:09:23.488 }, 00:09:23.488 { 00:09:23.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.488 "dma_device_type": 2 00:09:23.488 } 00:09:23.488 ], 00:09:23.488 "driver_specific": {} 00:09:23.488 } 00:09:23.488 ] 00:09:23.488 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.489 [2024-11-20 03:16:12.879850] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.489 [2024-11-20 03:16:12.879952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.489 [2024-11-20 03:16:12.879997] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.489 [2024-11-20 03:16:12.881853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.489 
03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.489 "name": "Existed_Raid", 00:09:23.489 "uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:23.489 "strip_size_kb": 0, 00:09:23.489 "state": "configuring", 00:09:23.489 "raid_level": "raid1", 00:09:23.489 "superblock": true, 00:09:23.489 "num_base_bdevs": 3, 00:09:23.489 "num_base_bdevs_discovered": 2, 00:09:23.489 "num_base_bdevs_operational": 3, 00:09:23.489 "base_bdevs_list": [ 00:09:23.489 { 00:09:23.489 "name": "BaseBdev1", 00:09:23.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.489 "is_configured": false, 00:09:23.489 "data_offset": 0, 00:09:23.489 "data_size": 0 00:09:23.489 }, 00:09:23.489 { 00:09:23.489 "name": "BaseBdev2", 00:09:23.489 "uuid": "8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:23.489 "is_configured": true, 00:09:23.489 "data_offset": 2048, 00:09:23.489 "data_size": 63488 00:09:23.489 }, 00:09:23.489 { 00:09:23.489 "name": "BaseBdev3", 00:09:23.489 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:23.489 "is_configured": true, 00:09:23.489 "data_offset": 2048, 00:09:23.489 "data_size": 63488 00:09:23.489 } 00:09:23.489 ] 00:09:23.489 }' 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.489 03:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.749 [2024-11-20 03:16:13.295142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:23.749 03:16:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.749 "name": 
"Existed_Raid", 00:09:23.749 "uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:23.749 "strip_size_kb": 0, 00:09:23.749 "state": "configuring", 00:09:23.749 "raid_level": "raid1", 00:09:23.749 "superblock": true, 00:09:23.749 "num_base_bdevs": 3, 00:09:23.749 "num_base_bdevs_discovered": 1, 00:09:23.749 "num_base_bdevs_operational": 3, 00:09:23.749 "base_bdevs_list": [ 00:09:23.749 { 00:09:23.749 "name": "BaseBdev1", 00:09:23.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.749 "is_configured": false, 00:09:23.749 "data_offset": 0, 00:09:23.749 "data_size": 0 00:09:23.749 }, 00:09:23.749 { 00:09:23.749 "name": null, 00:09:23.749 "uuid": "8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:23.749 "is_configured": false, 00:09:23.749 "data_offset": 0, 00:09:23.749 "data_size": 63488 00:09:23.749 }, 00:09:23.749 { 00:09:23.749 "name": "BaseBdev3", 00:09:23.749 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:23.749 "is_configured": true, 00:09:23.749 "data_offset": 2048, 00:09:23.749 "data_size": 63488 00:09:23.749 } 00:09:23.749 ] 00:09:23.749 }' 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.749 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:24.318 
03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.318 [2024-11-20 03:16:13.798938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.318 BaseBdev1 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:24.318 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.318 [ 00:09:24.318 { 00:09:24.318 "name": "BaseBdev1", 00:09:24.318 "aliases": [ 00:09:24.318 "0546ac38-f90e-4a30-891e-2463e954730b" 00:09:24.318 ], 00:09:24.318 "product_name": "Malloc disk", 00:09:24.318 "block_size": 512, 00:09:24.318 "num_blocks": 65536, 00:09:24.318 "uuid": "0546ac38-f90e-4a30-891e-2463e954730b", 00:09:24.318 "assigned_rate_limits": { 00:09:24.318 "rw_ios_per_sec": 0, 00:09:24.318 "rw_mbytes_per_sec": 0, 00:09:24.318 "r_mbytes_per_sec": 0, 00:09:24.318 "w_mbytes_per_sec": 0 00:09:24.318 }, 00:09:24.318 "claimed": true, 00:09:24.318 "claim_type": "exclusive_write", 00:09:24.318 "zoned": false, 00:09:24.318 "supported_io_types": { 00:09:24.318 "read": true, 00:09:24.318 "write": true, 00:09:24.318 "unmap": true, 00:09:24.318 "flush": true, 00:09:24.318 "reset": true, 00:09:24.318 "nvme_admin": false, 00:09:24.318 "nvme_io": false, 00:09:24.318 "nvme_io_md": false, 00:09:24.318 "write_zeroes": true, 00:09:24.318 "zcopy": true, 00:09:24.318 "get_zone_info": false, 00:09:24.318 "zone_management": false, 00:09:24.318 "zone_append": false, 00:09:24.318 "compare": false, 00:09:24.318 "compare_and_write": false, 00:09:24.318 "abort": true, 00:09:24.318 "seek_hole": false, 00:09:24.318 "seek_data": false, 00:09:24.318 "copy": true, 00:09:24.318 "nvme_iov_md": false 00:09:24.318 }, 00:09:24.318 "memory_domains": [ 00:09:24.318 { 00:09:24.318 "dma_device_id": "system", 00:09:24.318 "dma_device_type": 1 00:09:24.318 }, 00:09:24.318 { 00:09:24.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.319 "dma_device_type": 2 00:09:24.319 } 00:09:24.319 ], 00:09:24.319 "driver_specific": {} 00:09:24.319 } 00:09:24.319 ] 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.319 
03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.319 "name": "Existed_Raid", 00:09:24.319 "uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:24.319 "strip_size_kb": 0, 
00:09:24.319 "state": "configuring", 00:09:24.319 "raid_level": "raid1", 00:09:24.319 "superblock": true, 00:09:24.319 "num_base_bdevs": 3, 00:09:24.319 "num_base_bdevs_discovered": 2, 00:09:24.319 "num_base_bdevs_operational": 3, 00:09:24.319 "base_bdevs_list": [ 00:09:24.319 { 00:09:24.319 "name": "BaseBdev1", 00:09:24.319 "uuid": "0546ac38-f90e-4a30-891e-2463e954730b", 00:09:24.319 "is_configured": true, 00:09:24.319 "data_offset": 2048, 00:09:24.319 "data_size": 63488 00:09:24.319 }, 00:09:24.319 { 00:09:24.319 "name": null, 00:09:24.319 "uuid": "8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:24.319 "is_configured": false, 00:09:24.319 "data_offset": 0, 00:09:24.319 "data_size": 63488 00:09:24.319 }, 00:09:24.319 { 00:09:24.319 "name": "BaseBdev3", 00:09:24.319 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:24.319 "is_configured": true, 00:09:24.319 "data_offset": 2048, 00:09:24.319 "data_size": 63488 00:09:24.319 } 00:09:24.319 ] 00:09:24.319 }' 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.319 03:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.579 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.579 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.579 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.579 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.838 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.838 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:24.838 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:24.838 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.838 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.839 [2024-11-20 03:16:14.258217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.839 "name": "Existed_Raid", 00:09:24.839 "uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:24.839 "strip_size_kb": 0, 00:09:24.839 "state": "configuring", 00:09:24.839 "raid_level": "raid1", 00:09:24.839 "superblock": true, 00:09:24.839 "num_base_bdevs": 3, 00:09:24.839 "num_base_bdevs_discovered": 1, 00:09:24.839 "num_base_bdevs_operational": 3, 00:09:24.839 "base_bdevs_list": [ 00:09:24.839 { 00:09:24.839 "name": "BaseBdev1", 00:09:24.839 "uuid": "0546ac38-f90e-4a30-891e-2463e954730b", 00:09:24.839 "is_configured": true, 00:09:24.839 "data_offset": 2048, 00:09:24.839 "data_size": 63488 00:09:24.839 }, 00:09:24.839 { 00:09:24.839 "name": null, 00:09:24.839 "uuid": "8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:24.839 "is_configured": false, 00:09:24.839 "data_offset": 0, 00:09:24.839 "data_size": 63488 00:09:24.839 }, 00:09:24.839 { 00:09:24.839 "name": null, 00:09:24.839 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:24.839 "is_configured": false, 00:09:24.839 "data_offset": 0, 00:09:24.839 "data_size": 63488 00:09:24.839 } 00:09:24.839 ] 00:09:24.839 }' 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.839 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.099 03:16:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.099 [2024-11-20 03:16:14.717482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.099 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.359 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.359 03:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.359 "name": "Existed_Raid", 00:09:25.359 "uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:25.359 "strip_size_kb": 0, 00:09:25.359 "state": "configuring", 00:09:25.359 "raid_level": "raid1", 00:09:25.359 "superblock": true, 00:09:25.359 "num_base_bdevs": 3, 00:09:25.359 "num_base_bdevs_discovered": 2, 00:09:25.359 "num_base_bdevs_operational": 3, 00:09:25.359 "base_bdevs_list": [ 00:09:25.359 { 00:09:25.359 "name": "BaseBdev1", 00:09:25.359 "uuid": "0546ac38-f90e-4a30-891e-2463e954730b", 00:09:25.359 "is_configured": true, 00:09:25.359 "data_offset": 2048, 00:09:25.359 "data_size": 63488 00:09:25.359 }, 00:09:25.359 { 00:09:25.359 "name": null, 00:09:25.359 "uuid": "8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:25.359 "is_configured": false, 00:09:25.359 "data_offset": 0, 00:09:25.359 "data_size": 63488 00:09:25.359 }, 00:09:25.359 { 00:09:25.359 "name": "BaseBdev3", 00:09:25.359 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:25.359 "is_configured": true, 00:09:25.359 "data_offset": 2048, 00:09:25.359 "data_size": 63488 00:09:25.359 } 00:09:25.359 ] 00:09:25.359 }' 00:09:25.359 03:16:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.359 03:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.619 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.619 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.619 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.619 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.619 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.619 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:25.619 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.619 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.619 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.619 [2024-11-20 03:16:15.228588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.879 "name": "Existed_Raid", 00:09:25.879 "uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:25.879 "strip_size_kb": 0, 00:09:25.879 "state": "configuring", 00:09:25.879 "raid_level": "raid1", 00:09:25.879 "superblock": true, 00:09:25.879 "num_base_bdevs": 3, 00:09:25.879 "num_base_bdevs_discovered": 1, 00:09:25.879 "num_base_bdevs_operational": 3, 00:09:25.879 "base_bdevs_list": [ 00:09:25.879 { 00:09:25.879 "name": null, 00:09:25.879 "uuid": "0546ac38-f90e-4a30-891e-2463e954730b", 00:09:25.879 "is_configured": false, 00:09:25.879 "data_offset": 0, 00:09:25.879 "data_size": 63488 00:09:25.879 }, 00:09:25.879 { 00:09:25.879 "name": null, 00:09:25.879 "uuid": 
"8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:25.879 "is_configured": false, 00:09:25.879 "data_offset": 0, 00:09:25.879 "data_size": 63488 00:09:25.879 }, 00:09:25.879 { 00:09:25.879 "name": "BaseBdev3", 00:09:25.879 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:25.879 "is_configured": true, 00:09:25.879 "data_offset": 2048, 00:09:25.879 "data_size": 63488 00:09:25.879 } 00:09:25.879 ] 00:09:25.879 }' 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.879 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.449 [2024-11-20 03:16:15.810718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.449 "name": "Existed_Raid", 00:09:26.449 "uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:26.449 "strip_size_kb": 0, 00:09:26.449 "state": "configuring", 00:09:26.449 
"raid_level": "raid1", 00:09:26.449 "superblock": true, 00:09:26.449 "num_base_bdevs": 3, 00:09:26.449 "num_base_bdevs_discovered": 2, 00:09:26.449 "num_base_bdevs_operational": 3, 00:09:26.449 "base_bdevs_list": [ 00:09:26.449 { 00:09:26.449 "name": null, 00:09:26.449 "uuid": "0546ac38-f90e-4a30-891e-2463e954730b", 00:09:26.449 "is_configured": false, 00:09:26.449 "data_offset": 0, 00:09:26.449 "data_size": 63488 00:09:26.449 }, 00:09:26.449 { 00:09:26.449 "name": "BaseBdev2", 00:09:26.449 "uuid": "8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:26.449 "is_configured": true, 00:09:26.449 "data_offset": 2048, 00:09:26.449 "data_size": 63488 00:09:26.449 }, 00:09:26.449 { 00:09:26.449 "name": "BaseBdev3", 00:09:26.449 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:26.449 "is_configured": true, 00:09:26.449 "data_offset": 2048, 00:09:26.449 "data_size": 63488 00:09:26.449 } 00:09:26.449 ] 00:09:26.449 }' 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.449 03:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:26.709 03:16:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0546ac38-f90e-4a30-891e-2463e954730b 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.709 [2024-11-20 03:16:16.334746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:26.709 [2024-11-20 03:16:16.334965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:26.709 [2024-11-20 03:16:16.334978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:26.709 [2024-11-20 03:16:16.335210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:26.709 [2024-11-20 03:16:16.335355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:26.709 [2024-11-20 03:16:16.335367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:26.709 [2024-11-20 03:16:16.335497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.709 NewBaseBdev 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:26.709 
03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.709 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.969 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.969 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:26.969 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.970 [ 00:09:26.970 { 00:09:26.970 "name": "NewBaseBdev", 00:09:26.970 "aliases": [ 00:09:26.970 "0546ac38-f90e-4a30-891e-2463e954730b" 00:09:26.970 ], 00:09:26.970 "product_name": "Malloc disk", 00:09:26.970 "block_size": 512, 00:09:26.970 "num_blocks": 65536, 00:09:26.970 "uuid": "0546ac38-f90e-4a30-891e-2463e954730b", 00:09:26.970 "assigned_rate_limits": { 00:09:26.970 "rw_ios_per_sec": 0, 00:09:26.970 "rw_mbytes_per_sec": 0, 00:09:26.970 "r_mbytes_per_sec": 0, 00:09:26.970 "w_mbytes_per_sec": 0 00:09:26.970 }, 00:09:26.970 "claimed": true, 00:09:26.970 "claim_type": "exclusive_write", 00:09:26.970 
"zoned": false, 00:09:26.970 "supported_io_types": { 00:09:26.970 "read": true, 00:09:26.970 "write": true, 00:09:26.970 "unmap": true, 00:09:26.970 "flush": true, 00:09:26.970 "reset": true, 00:09:26.970 "nvme_admin": false, 00:09:26.970 "nvme_io": false, 00:09:26.970 "nvme_io_md": false, 00:09:26.970 "write_zeroes": true, 00:09:26.970 "zcopy": true, 00:09:26.970 "get_zone_info": false, 00:09:26.970 "zone_management": false, 00:09:26.970 "zone_append": false, 00:09:26.970 "compare": false, 00:09:26.970 "compare_and_write": false, 00:09:26.970 "abort": true, 00:09:26.970 "seek_hole": false, 00:09:26.970 "seek_data": false, 00:09:26.970 "copy": true, 00:09:26.970 "nvme_iov_md": false 00:09:26.970 }, 00:09:26.970 "memory_domains": [ 00:09:26.970 { 00:09:26.970 "dma_device_id": "system", 00:09:26.970 "dma_device_type": 1 00:09:26.970 }, 00:09:26.970 { 00:09:26.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.970 "dma_device_type": 2 00:09:26.970 } 00:09:26.970 ], 00:09:26.970 "driver_specific": {} 00:09:26.970 } 00:09:26.970 ] 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.970 "name": "Existed_Raid", 00:09:26.970 "uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:26.970 "strip_size_kb": 0, 00:09:26.970 "state": "online", 00:09:26.970 "raid_level": "raid1", 00:09:26.970 "superblock": true, 00:09:26.970 "num_base_bdevs": 3, 00:09:26.970 "num_base_bdevs_discovered": 3, 00:09:26.970 "num_base_bdevs_operational": 3, 00:09:26.970 "base_bdevs_list": [ 00:09:26.970 { 00:09:26.970 "name": "NewBaseBdev", 00:09:26.970 "uuid": "0546ac38-f90e-4a30-891e-2463e954730b", 00:09:26.970 "is_configured": true, 00:09:26.970 "data_offset": 2048, 00:09:26.970 "data_size": 63488 00:09:26.970 }, 00:09:26.970 { 00:09:26.970 "name": "BaseBdev2", 00:09:26.970 "uuid": "8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:26.970 "is_configured": true, 00:09:26.970 "data_offset": 2048, 00:09:26.970 "data_size": 63488 00:09:26.970 }, 00:09:26.970 
{ 00:09:26.970 "name": "BaseBdev3", 00:09:26.970 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:26.970 "is_configured": true, 00:09:26.970 "data_offset": 2048, 00:09:26.970 "data_size": 63488 00:09:26.970 } 00:09:26.970 ] 00:09:26.970 }' 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.970 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.229 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.229 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.229 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.229 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.229 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.229 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.229 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.230 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.230 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.230 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.230 [2024-11-20 03:16:16.850249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.489 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.489 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.489 "name": "Existed_Raid", 00:09:27.489 
"aliases": [ 00:09:27.489 "cadf0496-6e9c-4d34-b91a-f3b9c969fe00" 00:09:27.489 ], 00:09:27.489 "product_name": "Raid Volume", 00:09:27.489 "block_size": 512, 00:09:27.489 "num_blocks": 63488, 00:09:27.489 "uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:27.489 "assigned_rate_limits": { 00:09:27.489 "rw_ios_per_sec": 0, 00:09:27.489 "rw_mbytes_per_sec": 0, 00:09:27.489 "r_mbytes_per_sec": 0, 00:09:27.489 "w_mbytes_per_sec": 0 00:09:27.489 }, 00:09:27.489 "claimed": false, 00:09:27.489 "zoned": false, 00:09:27.489 "supported_io_types": { 00:09:27.489 "read": true, 00:09:27.489 "write": true, 00:09:27.489 "unmap": false, 00:09:27.489 "flush": false, 00:09:27.489 "reset": true, 00:09:27.489 "nvme_admin": false, 00:09:27.489 "nvme_io": false, 00:09:27.489 "nvme_io_md": false, 00:09:27.489 "write_zeroes": true, 00:09:27.489 "zcopy": false, 00:09:27.489 "get_zone_info": false, 00:09:27.489 "zone_management": false, 00:09:27.489 "zone_append": false, 00:09:27.489 "compare": false, 00:09:27.489 "compare_and_write": false, 00:09:27.489 "abort": false, 00:09:27.489 "seek_hole": false, 00:09:27.489 "seek_data": false, 00:09:27.489 "copy": false, 00:09:27.489 "nvme_iov_md": false 00:09:27.489 }, 00:09:27.489 "memory_domains": [ 00:09:27.489 { 00:09:27.489 "dma_device_id": "system", 00:09:27.489 "dma_device_type": 1 00:09:27.489 }, 00:09:27.489 { 00:09:27.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.489 "dma_device_type": 2 00:09:27.489 }, 00:09:27.489 { 00:09:27.489 "dma_device_id": "system", 00:09:27.489 "dma_device_type": 1 00:09:27.489 }, 00:09:27.489 { 00:09:27.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.489 "dma_device_type": 2 00:09:27.489 }, 00:09:27.489 { 00:09:27.489 "dma_device_id": "system", 00:09:27.489 "dma_device_type": 1 00:09:27.489 }, 00:09:27.489 { 00:09:27.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.489 "dma_device_type": 2 00:09:27.489 } 00:09:27.489 ], 00:09:27.489 "driver_specific": { 00:09:27.489 "raid": { 00:09:27.489 
"uuid": "cadf0496-6e9c-4d34-b91a-f3b9c969fe00", 00:09:27.489 "strip_size_kb": 0, 00:09:27.489 "state": "online", 00:09:27.489 "raid_level": "raid1", 00:09:27.489 "superblock": true, 00:09:27.489 "num_base_bdevs": 3, 00:09:27.489 "num_base_bdevs_discovered": 3, 00:09:27.489 "num_base_bdevs_operational": 3, 00:09:27.489 "base_bdevs_list": [ 00:09:27.489 { 00:09:27.489 "name": "NewBaseBdev", 00:09:27.489 "uuid": "0546ac38-f90e-4a30-891e-2463e954730b", 00:09:27.489 "is_configured": true, 00:09:27.489 "data_offset": 2048, 00:09:27.489 "data_size": 63488 00:09:27.489 }, 00:09:27.489 { 00:09:27.489 "name": "BaseBdev2", 00:09:27.489 "uuid": "8fa562d8-2df1-4cae-9a91-e3e2cd417117", 00:09:27.489 "is_configured": true, 00:09:27.489 "data_offset": 2048, 00:09:27.489 "data_size": 63488 00:09:27.489 }, 00:09:27.489 { 00:09:27.489 "name": "BaseBdev3", 00:09:27.489 "uuid": "7d08055d-38b0-450f-80af-20c6ac457ea6", 00:09:27.489 "is_configured": true, 00:09:27.489 "data_offset": 2048, 00:09:27.489 "data_size": 63488 00:09:27.489 } 00:09:27.489 ] 00:09:27.489 } 00:09:27.490 } 00:09:27.490 }' 00:09:27.490 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.490 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:27.490 BaseBdev2 00:09:27.490 BaseBdev3' 00:09:27.490 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.490 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.490 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.490 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:27.490 03:16:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.490 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.490 03:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.490 03:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.490 [2024-11-20 03:16:17.097527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.490 [2024-11-20 03:16:17.097561] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.490 [2024-11-20 03:16:17.097673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.490 [2024-11-20 03:16:17.097990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.490 [2024-11-20 03:16:17.098000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67877 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 67877 ']' 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67877 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.490 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67877 00:09:27.749 killing process with pid 67877 00:09:27.749 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.749 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.749 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67877' 00:09:27.749 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67877 00:09:27.749 [2024-11-20 03:16:17.134512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.749 03:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67877 00:09:28.010 [2024-11-20 03:16:17.438750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.953 03:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:28.953 00:09:28.953 real 0m10.276s 00:09:28.953 user 0m16.299s 00:09:28.953 sys 0m1.758s 00:09:28.953 03:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.953 ************************************ 00:09:28.953 END TEST raid_state_function_test_sb 00:09:28.953 ************************************ 00:09:28.953 03:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.212 03:16:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:29.212 03:16:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:29.212 03:16:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.212 03:16:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.213 ************************************ 00:09:29.213 START TEST raid_superblock_test 00:09:29.213 ************************************ 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68498 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:29.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68498 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68498 ']' 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.213 03:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.213 [2024-11-20 03:16:18.705893] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:09:29.213 [2024-11-20 03:16:18.706100] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68498 ] 00:09:29.473 [2024-11-20 03:16:18.877650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.473 [2024-11-20 03:16:18.997802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.732 [2024-11-20 03:16:19.203204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.732 [2024-11-20 03:16:19.203351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:29.992 
03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.992 malloc1 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.992 [2024-11-20 03:16:19.594340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:29.992 [2024-11-20 03:16:19.594455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.992 [2024-11-20 03:16:19.594508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:29.992 [2024-11-20 03:16:19.594539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.992 [2024-11-20 03:16:19.596845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.992 [2024-11-20 03:16:19.596923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:29.992 pt1 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.992 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.252 malloc2 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.252 [2024-11-20 03:16:19.649956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:30.252 [2024-11-20 03:16:19.650016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.252 [2024-11-20 03:16:19.650040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:30.252 [2024-11-20 03:16:19.650048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.252 [2024-11-20 03:16:19.652231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.252 [2024-11-20 03:16:19.652311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:30.252 
pt2 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.252 malloc3 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.252 [2024-11-20 03:16:19.716761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:30.252 [2024-11-20 03:16:19.716888] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.252 [2024-11-20 03:16:19.716933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:30.252 [2024-11-20 03:16:19.716960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.252 [2024-11-20 03:16:19.719212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.252 [2024-11-20 03:16:19.719297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:30.252 pt3 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.252 [2024-11-20 03:16:19.728803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:30.252 [2024-11-20 03:16:19.730701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.252 [2024-11-20 03:16:19.730824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:30.252 [2024-11-20 03:16:19.731012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:30.252 [2024-11-20 03:16:19.731062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:30.252 [2024-11-20 03:16:19.731344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:30.252 
[2024-11-20 03:16:19.731563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:30.252 [2024-11-20 03:16:19.731621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:30.252 [2024-11-20 03:16:19.731832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.252 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.253 03:16:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.253 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.253 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.253 "name": "raid_bdev1", 00:09:30.253 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:30.253 "strip_size_kb": 0, 00:09:30.253 "state": "online", 00:09:30.253 "raid_level": "raid1", 00:09:30.253 "superblock": true, 00:09:30.253 "num_base_bdevs": 3, 00:09:30.253 "num_base_bdevs_discovered": 3, 00:09:30.253 "num_base_bdevs_operational": 3, 00:09:30.253 "base_bdevs_list": [ 00:09:30.253 { 00:09:30.253 "name": "pt1", 00:09:30.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.253 "is_configured": true, 00:09:30.253 "data_offset": 2048, 00:09:30.253 "data_size": 63488 00:09:30.253 }, 00:09:30.253 { 00:09:30.253 "name": "pt2", 00:09:30.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.253 "is_configured": true, 00:09:30.253 "data_offset": 2048, 00:09:30.253 "data_size": 63488 00:09:30.253 }, 00:09:30.253 { 00:09:30.253 "name": "pt3", 00:09:30.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.253 "is_configured": true, 00:09:30.253 "data_offset": 2048, 00:09:30.253 "data_size": 63488 00:09:30.253 } 00:09:30.253 ] 00:09:30.253 }' 00:09:30.253 03:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.253 03:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.822 03:16:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.822 [2024-11-20 03:16:20.160363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.822 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.822 "name": "raid_bdev1", 00:09:30.822 "aliases": [ 00:09:30.822 "8c8517dc-e55c-4c31-90a0-e1dd972bff74" 00:09:30.822 ], 00:09:30.822 "product_name": "Raid Volume", 00:09:30.822 "block_size": 512, 00:09:30.822 "num_blocks": 63488, 00:09:30.822 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:30.822 "assigned_rate_limits": { 00:09:30.822 "rw_ios_per_sec": 0, 00:09:30.822 "rw_mbytes_per_sec": 0, 00:09:30.822 "r_mbytes_per_sec": 0, 00:09:30.822 "w_mbytes_per_sec": 0 00:09:30.822 }, 00:09:30.822 "claimed": false, 00:09:30.822 "zoned": false, 00:09:30.822 "supported_io_types": { 00:09:30.822 "read": true, 00:09:30.822 "write": true, 00:09:30.822 "unmap": false, 00:09:30.822 "flush": false, 00:09:30.822 "reset": true, 00:09:30.822 "nvme_admin": false, 00:09:30.822 "nvme_io": false, 00:09:30.822 "nvme_io_md": false, 00:09:30.822 "write_zeroes": true, 00:09:30.822 "zcopy": false, 00:09:30.822 "get_zone_info": false, 00:09:30.822 "zone_management": false, 00:09:30.822 "zone_append": false, 00:09:30.822 "compare": false, 00:09:30.822 
"compare_and_write": false, 00:09:30.822 "abort": false, 00:09:30.822 "seek_hole": false, 00:09:30.822 "seek_data": false, 00:09:30.822 "copy": false, 00:09:30.822 "nvme_iov_md": false 00:09:30.822 }, 00:09:30.822 "memory_domains": [ 00:09:30.822 { 00:09:30.822 "dma_device_id": "system", 00:09:30.822 "dma_device_type": 1 00:09:30.822 }, 00:09:30.822 { 00:09:30.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.822 "dma_device_type": 2 00:09:30.823 }, 00:09:30.823 { 00:09:30.823 "dma_device_id": "system", 00:09:30.823 "dma_device_type": 1 00:09:30.823 }, 00:09:30.823 { 00:09:30.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.823 "dma_device_type": 2 00:09:30.823 }, 00:09:30.823 { 00:09:30.823 "dma_device_id": "system", 00:09:30.823 "dma_device_type": 1 00:09:30.823 }, 00:09:30.823 { 00:09:30.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.823 "dma_device_type": 2 00:09:30.823 } 00:09:30.823 ], 00:09:30.823 "driver_specific": { 00:09:30.823 "raid": { 00:09:30.823 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:30.823 "strip_size_kb": 0, 00:09:30.823 "state": "online", 00:09:30.823 "raid_level": "raid1", 00:09:30.823 "superblock": true, 00:09:30.823 "num_base_bdevs": 3, 00:09:30.823 "num_base_bdevs_discovered": 3, 00:09:30.823 "num_base_bdevs_operational": 3, 00:09:30.823 "base_bdevs_list": [ 00:09:30.823 { 00:09:30.823 "name": "pt1", 00:09:30.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.823 "is_configured": true, 00:09:30.823 "data_offset": 2048, 00:09:30.823 "data_size": 63488 00:09:30.823 }, 00:09:30.823 { 00:09:30.823 "name": "pt2", 00:09:30.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.823 "is_configured": true, 00:09:30.823 "data_offset": 2048, 00:09:30.823 "data_size": 63488 00:09:30.823 }, 00:09:30.823 { 00:09:30.823 "name": "pt3", 00:09:30.823 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.823 "is_configured": true, 00:09:30.823 "data_offset": 2048, 00:09:30.823 "data_size": 63488 00:09:30.823 } 
00:09:30.823 ] 00:09:30.823 } 00:09:30.823 } 00:09:30.823 }' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:30.823 pt2 00:09:30.823 pt3' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.823 03:16:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:30.823 [2024-11-20 03:16:20.431923] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.823 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8c8517dc-e55c-4c31-90a0-e1dd972bff74 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8c8517dc-e55c-4c31-90a0-e1dd972bff74 ']' 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 [2024-11-20 03:16:20.479526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.084 [2024-11-20 03:16:20.479562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.084 [2024-11-20 03:16:20.479703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.084 [2024-11-20 03:16:20.479783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.084 [2024-11-20 03:16:20.479794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:31.084 
03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 [2024-11-20 03:16:20.627312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:31.084 [2024-11-20 03:16:20.629439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:31.084 [2024-11-20 03:16:20.629494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:09:31.084 [2024-11-20 03:16:20.629547] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:31.084 [2024-11-20 03:16:20.629623] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:31.084 [2024-11-20 03:16:20.629647] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:31.084 [2024-11-20 03:16:20.629665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.084 [2024-11-20 03:16:20.629676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:31.084 request: 00:09:31.084 { 00:09:31.084 "name": "raid_bdev1", 00:09:31.084 "raid_level": "raid1", 00:09:31.084 "base_bdevs": [ 00:09:31.084 "malloc1", 00:09:31.084 "malloc2", 00:09:31.084 "malloc3" 00:09:31.084 ], 00:09:31.084 "superblock": false, 00:09:31.084 "method": "bdev_raid_create", 00:09:31.084 "req_id": 1 00:09:31.084 } 00:09:31.084 Got JSON-RPC error response 00:09:31.084 response: 00:09:31.084 { 00:09:31.084 "code": -17, 00:09:31.084 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:31.084 } 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:31.084 03:16:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 [2024-11-20 03:16:20.675142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:31.084 [2024-11-20 03:16:20.675270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.084 [2024-11-20 03:16:20.675319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:31.084 [2024-11-20 03:16:20.675357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.084 [2024-11-20 03:16:20.677649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.084 [2024-11-20 03:16:20.677715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:31.084 [2024-11-20 03:16:20.677837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:31.084 [2024-11-20 03:16:20.677918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:31.084 pt1 00:09:31.084 03:16:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.084 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.344 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.344 "name": "raid_bdev1", 00:09:31.344 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:31.344 "strip_size_kb": 0, 00:09:31.344 "state": 
"configuring", 00:09:31.344 "raid_level": "raid1", 00:09:31.344 "superblock": true, 00:09:31.344 "num_base_bdevs": 3, 00:09:31.344 "num_base_bdevs_discovered": 1, 00:09:31.344 "num_base_bdevs_operational": 3, 00:09:31.344 "base_bdevs_list": [ 00:09:31.344 { 00:09:31.344 "name": "pt1", 00:09:31.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.344 "is_configured": true, 00:09:31.344 "data_offset": 2048, 00:09:31.344 "data_size": 63488 00:09:31.344 }, 00:09:31.344 { 00:09:31.344 "name": null, 00:09:31.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.344 "is_configured": false, 00:09:31.344 "data_offset": 2048, 00:09:31.344 "data_size": 63488 00:09:31.344 }, 00:09:31.344 { 00:09:31.344 "name": null, 00:09:31.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.344 "is_configured": false, 00:09:31.344 "data_offset": 2048, 00:09:31.344 "data_size": 63488 00:09:31.344 } 00:09:31.344 ] 00:09:31.344 }' 00:09:31.344 03:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.344 03:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.605 [2024-11-20 03:16:21.074493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.605 [2024-11-20 03:16:21.074683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.605 [2024-11-20 03:16:21.074730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:31.605 
[2024-11-20 03:16:21.074762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.605 [2024-11-20 03:16:21.075291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.605 [2024-11-20 03:16:21.075355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:31.605 [2024-11-20 03:16:21.075477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:31.605 [2024-11-20 03:16:21.075532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.605 pt2 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.605 [2024-11-20 03:16:21.086472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.605 "name": "raid_bdev1", 00:09:31.605 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:31.605 "strip_size_kb": 0, 00:09:31.605 "state": "configuring", 00:09:31.605 "raid_level": "raid1", 00:09:31.605 "superblock": true, 00:09:31.605 "num_base_bdevs": 3, 00:09:31.605 "num_base_bdevs_discovered": 1, 00:09:31.605 "num_base_bdevs_operational": 3, 00:09:31.605 "base_bdevs_list": [ 00:09:31.605 { 00:09:31.605 "name": "pt1", 00:09:31.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.605 "is_configured": true, 00:09:31.605 "data_offset": 2048, 00:09:31.605 "data_size": 63488 00:09:31.605 }, 00:09:31.605 { 00:09:31.605 "name": null, 00:09:31.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.605 "is_configured": false, 00:09:31.605 "data_offset": 0, 00:09:31.605 "data_size": 63488 00:09:31.605 }, 00:09:31.605 { 00:09:31.605 "name": null, 00:09:31.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.605 "is_configured": false, 00:09:31.605 
"data_offset": 2048, 00:09:31.605 "data_size": 63488 00:09:31.605 } 00:09:31.605 ] 00:09:31.605 }' 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.605 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.175 [2024-11-20 03:16:21.565658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:32.175 [2024-11-20 03:16:21.565798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.175 [2024-11-20 03:16:21.565834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:32.175 [2024-11-20 03:16:21.565878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.175 [2024-11-20 03:16:21.566397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.175 [2024-11-20 03:16:21.566474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:32.175 [2024-11-20 03:16:21.566602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:32.175 [2024-11-20 03:16:21.566697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:32.175 pt2 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.175 03:16:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.175 [2024-11-20 03:16:21.577606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:32.175 [2024-11-20 03:16:21.577718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.175 [2024-11-20 03:16:21.577754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:32.175 [2024-11-20 03:16:21.577809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.175 [2024-11-20 03:16:21.578253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.175 [2024-11-20 03:16:21.578322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:32.175 [2024-11-20 03:16:21.578420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:32.175 [2024-11-20 03:16:21.578472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:32.175 [2024-11-20 03:16:21.578685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:32.175 [2024-11-20 03:16:21.578736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.175 [2024-11-20 03:16:21.579009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:32.175 [2024-11-20 03:16:21.579225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:32.175 [2024-11-20 03:16:21.579271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:32.175 [2024-11-20 03:16:21.579479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.175 pt3 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.175 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.176 03:16:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.176 "name": "raid_bdev1", 00:09:32.176 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:32.176 "strip_size_kb": 0, 00:09:32.176 "state": "online", 00:09:32.176 "raid_level": "raid1", 00:09:32.176 "superblock": true, 00:09:32.176 "num_base_bdevs": 3, 00:09:32.176 "num_base_bdevs_discovered": 3, 00:09:32.176 "num_base_bdevs_operational": 3, 00:09:32.176 "base_bdevs_list": [ 00:09:32.176 { 00:09:32.176 "name": "pt1", 00:09:32.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.176 "is_configured": true, 00:09:32.176 "data_offset": 2048, 00:09:32.176 "data_size": 63488 00:09:32.176 }, 00:09:32.176 { 00:09:32.176 "name": "pt2", 00:09:32.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.176 "is_configured": true, 00:09:32.176 "data_offset": 2048, 00:09:32.176 "data_size": 63488 00:09:32.176 }, 00:09:32.176 { 00:09:32.176 "name": "pt3", 00:09:32.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.176 "is_configured": true, 00:09:32.176 "data_offset": 2048, 00:09:32.176 "data_size": 63488 00:09:32.176 } 00:09:32.176 ] 00:09:32.176 }' 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.176 03:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.436 [2024-11-20 03:16:22.041121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.436 "name": "raid_bdev1", 00:09:32.436 "aliases": [ 00:09:32.436 "8c8517dc-e55c-4c31-90a0-e1dd972bff74" 00:09:32.436 ], 00:09:32.436 "product_name": "Raid Volume", 00:09:32.436 "block_size": 512, 00:09:32.436 "num_blocks": 63488, 00:09:32.436 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:32.436 "assigned_rate_limits": { 00:09:32.436 "rw_ios_per_sec": 0, 00:09:32.436 "rw_mbytes_per_sec": 0, 00:09:32.436 "r_mbytes_per_sec": 0, 00:09:32.436 "w_mbytes_per_sec": 0 00:09:32.436 }, 00:09:32.436 "claimed": false, 00:09:32.436 "zoned": false, 00:09:32.436 "supported_io_types": { 00:09:32.436 "read": true, 00:09:32.436 "write": true, 00:09:32.436 "unmap": false, 00:09:32.436 "flush": false, 00:09:32.436 "reset": true, 00:09:32.436 "nvme_admin": false, 00:09:32.436 "nvme_io": false, 00:09:32.436 "nvme_io_md": false, 00:09:32.436 "write_zeroes": true, 00:09:32.436 "zcopy": false, 00:09:32.436 "get_zone_info": 
false, 00:09:32.436 "zone_management": false, 00:09:32.436 "zone_append": false, 00:09:32.436 "compare": false, 00:09:32.436 "compare_and_write": false, 00:09:32.436 "abort": false, 00:09:32.436 "seek_hole": false, 00:09:32.436 "seek_data": false, 00:09:32.436 "copy": false, 00:09:32.436 "nvme_iov_md": false 00:09:32.436 }, 00:09:32.436 "memory_domains": [ 00:09:32.436 { 00:09:32.436 "dma_device_id": "system", 00:09:32.436 "dma_device_type": 1 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.436 "dma_device_type": 2 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "dma_device_id": "system", 00:09:32.436 "dma_device_type": 1 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.436 "dma_device_type": 2 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "dma_device_id": "system", 00:09:32.436 "dma_device_type": 1 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.436 "dma_device_type": 2 00:09:32.436 } 00:09:32.436 ], 00:09:32.436 "driver_specific": { 00:09:32.436 "raid": { 00:09:32.436 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:32.436 "strip_size_kb": 0, 00:09:32.436 "state": "online", 00:09:32.436 "raid_level": "raid1", 00:09:32.436 "superblock": true, 00:09:32.436 "num_base_bdevs": 3, 00:09:32.436 "num_base_bdevs_discovered": 3, 00:09:32.436 "num_base_bdevs_operational": 3, 00:09:32.436 "base_bdevs_list": [ 00:09:32.436 { 00:09:32.436 "name": "pt1", 00:09:32.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.436 "is_configured": true, 00:09:32.436 "data_offset": 2048, 00:09:32.436 "data_size": 63488 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "name": "pt2", 00:09:32.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.436 "is_configured": true, 00:09:32.436 "data_offset": 2048, 00:09:32.436 "data_size": 63488 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "name": "pt3", 00:09:32.436 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:32.436 "is_configured": true, 00:09:32.436 "data_offset": 2048, 00:09:32.436 "data_size": 63488 00:09:32.436 } 00:09:32.436 ] 00:09:32.436 } 00:09:32.436 } 00:09:32.436 }' 00:09:32.436 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:32.696 pt2 00:09:32.696 pt3' 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.696 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.696 03:16:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.697 [2024-11-20 03:16:22.304704] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.697 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8c8517dc-e55c-4c31-90a0-e1dd972bff74 '!=' 8c8517dc-e55c-4c31-90a0-e1dd972bff74 ']' 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.957 [2024-11-20 03:16:22.348328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.957 03:16:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.957 "name": "raid_bdev1", 00:09:32.957 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:32.957 "strip_size_kb": 0, 00:09:32.957 "state": "online", 00:09:32.957 "raid_level": "raid1", 00:09:32.957 "superblock": true, 00:09:32.957 "num_base_bdevs": 3, 00:09:32.957 "num_base_bdevs_discovered": 2, 00:09:32.957 "num_base_bdevs_operational": 2, 00:09:32.957 "base_bdevs_list": [ 00:09:32.957 { 00:09:32.957 "name": null, 00:09:32.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.957 "is_configured": false, 00:09:32.957 "data_offset": 0, 00:09:32.957 "data_size": 63488 00:09:32.957 }, 00:09:32.957 { 00:09:32.957 "name": "pt2", 00:09:32.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.957 "is_configured": true, 00:09:32.957 "data_offset": 2048, 00:09:32.957 "data_size": 63488 00:09:32.957 }, 00:09:32.957 { 00:09:32.957 "name": "pt3", 00:09:32.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.957 "is_configured": true, 00:09:32.957 "data_offset": 2048, 00:09:32.957 "data_size": 63488 00:09:32.957 } 
00:09:32.957 ] 00:09:32.957 }' 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.957 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.217 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:33.217 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.217 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.217 [2024-11-20 03:16:22.795527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.217 [2024-11-20 03:16:22.795638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.217 [2024-11-20 03:16:22.795747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.217 [2024-11-20 03:16:22.795843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.217 [2024-11-20 03:16:22.795886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:33.217 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.217 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.217 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:33.217 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.217 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.217 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.477 03:16:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.477 [2024-11-20 03:16:22.883349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.477 [2024-11-20 03:16:22.883494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.477 [2024-11-20 03:16:22.883533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:33.477 [2024-11-20 03:16:22.883570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.477 [2024-11-20 03:16:22.885842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.477 [2024-11-20 03:16:22.885923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.477 [2024-11-20 03:16:22.886030] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:33.477 [2024-11-20 03:16:22.886118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.477 pt2 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.477 03:16:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.477 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.477 "name": "raid_bdev1", 00:09:33.477 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:33.477 "strip_size_kb": 0, 00:09:33.477 "state": "configuring", 00:09:33.477 "raid_level": "raid1", 00:09:33.477 "superblock": true, 00:09:33.477 "num_base_bdevs": 3, 00:09:33.477 "num_base_bdevs_discovered": 1, 00:09:33.477 "num_base_bdevs_operational": 2, 00:09:33.477 "base_bdevs_list": [ 00:09:33.477 { 00:09:33.477 "name": null, 00:09:33.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.477 "is_configured": false, 00:09:33.477 "data_offset": 2048, 00:09:33.477 "data_size": 63488 00:09:33.477 }, 00:09:33.477 { 00:09:33.477 "name": "pt2", 00:09:33.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.477 "is_configured": true, 00:09:33.477 "data_offset": 2048, 00:09:33.477 "data_size": 63488 00:09:33.477 }, 00:09:33.477 { 00:09:33.477 "name": null, 00:09:33.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.477 "is_configured": false, 00:09:33.477 "data_offset": 2048, 00:09:33.477 "data_size": 63488 00:09:33.478 } 
00:09:33.478 ] 00:09:33.478 }' 00:09:33.478 03:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.478 03:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.737 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:33.737 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:33.737 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:33.737 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:33.737 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.738 [2024-11-20 03:16:23.330659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:33.738 [2024-11-20 03:16:23.330733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.738 [2024-11-20 03:16:23.330772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:33.738 [2024-11-20 03:16:23.330784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.738 [2024-11-20 03:16:23.331287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.738 [2024-11-20 03:16:23.331320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:33.738 [2024-11-20 03:16:23.331421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:33.738 [2024-11-20 03:16:23.331453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:33.738 [2024-11-20 03:16:23.331593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:33.738 [2024-11-20 03:16:23.331630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.738 [2024-11-20 03:16:23.331921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:33.738 [2024-11-20 03:16:23.332165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:33.738 [2024-11-20 03:16:23.332180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:33.738 [2024-11-20 03:16:23.332351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.738 pt3 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.738 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.998 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.998 "name": "raid_bdev1", 00:09:33.998 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:33.998 "strip_size_kb": 0, 00:09:33.998 "state": "online", 00:09:33.998 "raid_level": "raid1", 00:09:33.998 "superblock": true, 00:09:33.998 "num_base_bdevs": 3, 00:09:33.998 "num_base_bdevs_discovered": 2, 00:09:33.998 "num_base_bdevs_operational": 2, 00:09:33.998 "base_bdevs_list": [ 00:09:33.998 { 00:09:33.998 "name": null, 00:09:33.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.998 "is_configured": false, 00:09:33.998 "data_offset": 2048, 00:09:33.998 "data_size": 63488 00:09:33.998 }, 00:09:33.998 { 00:09:33.998 "name": "pt2", 00:09:33.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.998 "is_configured": true, 00:09:33.998 "data_offset": 2048, 00:09:33.998 "data_size": 63488 00:09:33.998 }, 00:09:33.998 { 00:09:33.998 "name": "pt3", 00:09:33.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.998 "is_configured": true, 00:09:33.998 "data_offset": 2048, 00:09:33.998 "data_size": 63488 00:09:33.998 } 00:09:33.998 ] 00:09:33.998 }' 00:09:33.998 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.998 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.258 [2024-11-20 03:16:23.757920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.258 [2024-11-20 03:16:23.758018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.258 [2024-11-20 03:16:23.758121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.258 [2024-11-20 03:16:23.758223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.258 [2024-11-20 03:16:23.758272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.258 [2024-11-20 03:16:23.833801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.258 [2024-11-20 03:16:23.833902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.258 [2024-11-20 03:16:23.833968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:34.258 [2024-11-20 03:16:23.833997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.258 [2024-11-20 03:16:23.836335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.258 [2024-11-20 03:16:23.836407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.258 [2024-11-20 03:16:23.836498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:34.258 [2024-11-20 03:16:23.836566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.258 [2024-11-20 03:16:23.836716] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:34.258 [2024-11-20 03:16:23.836727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.258 [2024-11-20 03:16:23.836743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:34.258 [2024-11-20 03:16:23.836824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.258 pt1 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.258 03:16:23 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.518 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.518 "name": "raid_bdev1", 00:09:34.518 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:34.518 "strip_size_kb": 0, 00:09:34.518 "state": "configuring", 00:09:34.518 "raid_level": "raid1", 00:09:34.518 "superblock": true, 00:09:34.518 "num_base_bdevs": 3, 00:09:34.518 "num_base_bdevs_discovered": 1, 00:09:34.518 "num_base_bdevs_operational": 2, 00:09:34.518 "base_bdevs_list": [ 00:09:34.518 { 00:09:34.518 "name": null, 00:09:34.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.518 "is_configured": false, 00:09:34.518 "data_offset": 2048, 00:09:34.518 "data_size": 63488 00:09:34.518 }, 00:09:34.518 { 00:09:34.518 "name": "pt2", 00:09:34.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.518 "is_configured": true, 00:09:34.518 "data_offset": 2048, 00:09:34.518 "data_size": 63488 00:09:34.518 }, 00:09:34.518 { 00:09:34.518 "name": null, 00:09:34.518 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.518 "is_configured": false, 00:09:34.518 "data_offset": 2048, 00:09:34.518 "data_size": 63488 00:09:34.518 } 00:09:34.518 ] 00:09:34.518 }' 00:09:34.518 03:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.518 03:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.779 [2024-11-20 03:16:24.305050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:34.779 [2024-11-20 03:16:24.305192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.779 [2024-11-20 03:16:24.305233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:34.779 [2024-11-20 03:16:24.305261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.779 [2024-11-20 03:16:24.305805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.779 [2024-11-20 03:16:24.305870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:34.779 [2024-11-20 03:16:24.305998] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:34.779 [2024-11-20 03:16:24.306081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:34.779 [2024-11-20 03:16:24.306272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:34.779 [2024-11-20 03:16:24.306314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:34.779 [2024-11-20 03:16:24.306629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:34.779 [2024-11-20 03:16:24.306848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:34.779 [2024-11-20 03:16:24.306899] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:34.779 [2024-11-20 03:16:24.307115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.779 pt3 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.779 "name": "raid_bdev1", 00:09:34.779 "uuid": "8c8517dc-e55c-4c31-90a0-e1dd972bff74", 00:09:34.779 "strip_size_kb": 0, 00:09:34.779 "state": "online", 00:09:34.779 "raid_level": "raid1", 00:09:34.779 "superblock": true, 00:09:34.779 "num_base_bdevs": 3, 00:09:34.779 "num_base_bdevs_discovered": 2, 00:09:34.779 "num_base_bdevs_operational": 2, 00:09:34.779 "base_bdevs_list": [ 00:09:34.779 { 00:09:34.779 "name": null, 00:09:34.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.779 "is_configured": false, 00:09:34.779 "data_offset": 2048, 00:09:34.779 "data_size": 63488 00:09:34.779 }, 00:09:34.779 { 00:09:34.779 "name": "pt2", 00:09:34.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.779 "is_configured": true, 00:09:34.779 "data_offset": 2048, 00:09:34.779 "data_size": 63488 00:09:34.779 }, 00:09:34.779 { 00:09:34.779 "name": "pt3", 00:09:34.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.779 "is_configured": true, 00:09:34.779 "data_offset": 2048, 00:09:34.779 "data_size": 63488 00:09:34.779 } 00:09:34.779 ] 00:09:34.779 }' 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.779 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:35.350 [2024-11-20 03:16:24.816460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8c8517dc-e55c-4c31-90a0-e1dd972bff74 '!=' 8c8517dc-e55c-4c31-90a0-e1dd972bff74 ']' 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68498 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68498 ']' 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68498 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68498 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.350 killing process with pid 68498 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68498' 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68498 00:09:35.350 [2024-11-20 03:16:24.903931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.350 [2024-11-20 03:16:24.904055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.350 03:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68498 00:09:35.350 [2024-11-20 03:16:24.904123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.350 [2024-11-20 03:16:24.904137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:35.609 [2024-11-20 03:16:25.207138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.008 03:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:37.008 00:09:37.008 real 0m7.699s 00:09:37.008 user 0m12.013s 00:09:37.008 sys 0m1.387s 00:09:37.008 03:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.008 ************************************ 00:09:37.008 END TEST raid_superblock_test 00:09:37.008 ************************************ 00:09:37.008 03:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.008 03:16:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:37.008 03:16:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:37.008 03:16:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.008 03:16:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.008 ************************************ 00:09:37.008 START TEST raid_read_error_test 00:09:37.008 ************************************ 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:37.008 03:16:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:37.008 03:16:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.roMFVBVu4v 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68939 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68939 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68939 ']' 00:09:37.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.008 03:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.008 [2024-11-20 03:16:26.490128] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:09:37.009 [2024-11-20 03:16:26.490250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68939 ] 00:09:37.267 [2024-11-20 03:16:26.647174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.267 [2024-11-20 03:16:26.761423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.525 [2024-11-20 03:16:26.966791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.525 [2024-11-20 03:16:26.966947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.783 BaseBdev1_malloc 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.783 true 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.783 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.783 [2024-11-20 03:16:27.385579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:37.783 [2024-11-20 03:16:27.385650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.783 [2024-11-20 03:16:27.385671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:37.783 [2024-11-20 03:16:27.385690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.783 [2024-11-20 03:16:27.387917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.784 [2024-11-20 03:16:27.388002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:37.784 BaseBdev1 00:09:37.784 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.784 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.784 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:37.784 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.784 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.042 BaseBdev2_malloc 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.042 true 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.042 [2024-11-20 03:16:27.449787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:38.042 [2024-11-20 03:16:27.449858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.042 [2024-11-20 03:16:27.449896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:38.042 [2024-11-20 03:16:27.449907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.042 [2024-11-20 03:16:27.452154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.042 [2024-11-20 03:16:27.452198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:38.042 BaseBdev2 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.042 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.043 BaseBdev3_malloc 00:09:38.043 03:16:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.043 true 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.043 [2024-11-20 03:16:27.530914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:38.043 [2024-11-20 03:16:27.530969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.043 [2024-11-20 03:16:27.530987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:38.043 [2024-11-20 03:16:27.530997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.043 [2024-11-20 03:16:27.533164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.043 [2024-11-20 03:16:27.533257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:38.043 BaseBdev3 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.043 [2024-11-20 03:16:27.542963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.043 [2024-11-20 03:16:27.544783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.043 [2024-11-20 03:16:27.544858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.043 [2024-11-20 03:16:27.545061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:38.043 [2024-11-20 03:16:27.545074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.043 [2024-11-20 03:16:27.545315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:38.043 [2024-11-20 03:16:27.545500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:38.043 [2024-11-20 03:16:27.545513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:38.043 [2024-11-20 03:16:27.545669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.043 03:16:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.043 "name": "raid_bdev1", 00:09:38.043 "uuid": "134c6ef4-b0cf-4266-a5f0-26cc70eeb8f3", 00:09:38.043 "strip_size_kb": 0, 00:09:38.043 "state": "online", 00:09:38.043 "raid_level": "raid1", 00:09:38.043 "superblock": true, 00:09:38.043 "num_base_bdevs": 3, 00:09:38.043 "num_base_bdevs_discovered": 3, 00:09:38.043 "num_base_bdevs_operational": 3, 00:09:38.043 "base_bdevs_list": [ 00:09:38.043 { 00:09:38.043 "name": "BaseBdev1", 00:09:38.043 "uuid": "c9fdb468-df8b-585b-8907-49a7ac19f8ff", 00:09:38.043 "is_configured": true, 00:09:38.043 "data_offset": 2048, 00:09:38.043 "data_size": 63488 00:09:38.043 }, 00:09:38.043 { 00:09:38.043 "name": "BaseBdev2", 00:09:38.043 "uuid": "513da04c-8966-5cdc-af1f-41aa0d549c92", 00:09:38.043 "is_configured": true, 00:09:38.043 "data_offset": 2048, 00:09:38.043 "data_size": 63488 
00:09:38.043 }, 00:09:38.043 { 00:09:38.043 "name": "BaseBdev3", 00:09:38.043 "uuid": "9b277281-b7a2-5ac4-ab1e-b768b6c89c37", 00:09:38.043 "is_configured": true, 00:09:38.043 "data_offset": 2048, 00:09:38.043 "data_size": 63488 00:09:38.043 } 00:09:38.043 ] 00:09:38.043 }' 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.043 03:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.613 03:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:38.613 03:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:38.613 [2024-11-20 03:16:28.115298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.552 
03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.552 "name": "raid_bdev1", 00:09:39.552 "uuid": "134c6ef4-b0cf-4266-a5f0-26cc70eeb8f3", 00:09:39.552 "strip_size_kb": 0, 00:09:39.552 "state": "online", 00:09:39.552 "raid_level": "raid1", 00:09:39.552 "superblock": true, 00:09:39.552 "num_base_bdevs": 3, 00:09:39.552 "num_base_bdevs_discovered": 3, 00:09:39.552 "num_base_bdevs_operational": 3, 00:09:39.552 "base_bdevs_list": [ 00:09:39.552 { 00:09:39.552 "name": "BaseBdev1", 00:09:39.552 "uuid": "c9fdb468-df8b-585b-8907-49a7ac19f8ff", 
00:09:39.552 "is_configured": true, 00:09:39.552 "data_offset": 2048, 00:09:39.552 "data_size": 63488 00:09:39.552 }, 00:09:39.552 { 00:09:39.552 "name": "BaseBdev2", 00:09:39.552 "uuid": "513da04c-8966-5cdc-af1f-41aa0d549c92", 00:09:39.552 "is_configured": true, 00:09:39.552 "data_offset": 2048, 00:09:39.552 "data_size": 63488 00:09:39.552 }, 00:09:39.552 { 00:09:39.552 "name": "BaseBdev3", 00:09:39.552 "uuid": "9b277281-b7a2-5ac4-ab1e-b768b6c89c37", 00:09:39.552 "is_configured": true, 00:09:39.552 "data_offset": 2048, 00:09:39.552 "data_size": 63488 00:09:39.552 } 00:09:39.552 ] 00:09:39.552 }' 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.552 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.119 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.119 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.119 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.119 [2024-11-20 03:16:29.469132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.119 [2024-11-20 03:16:29.469169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.119 [2024-11-20 03:16:29.472103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.119 [2024-11-20 03:16:29.472156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.119 [2024-11-20 03:16:29.472256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.119 [2024-11-20 03:16:29.472266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:40.120 { 00:09:40.120 "results": [ 00:09:40.120 { 00:09:40.120 "job": "raid_bdev1", 
00:09:40.120 "core_mask": "0x1", 00:09:40.120 "workload": "randrw", 00:09:40.120 "percentage": 50, 00:09:40.120 "status": "finished", 00:09:40.120 "queue_depth": 1, 00:09:40.120 "io_size": 131072, 00:09:40.120 "runtime": 1.35457, 00:09:40.120 "iops": 13103.78939442037, 00:09:40.120 "mibps": 1637.9736743025462, 00:09:40.120 "io_failed": 0, 00:09:40.120 "io_timeout": 0, 00:09:40.120 "avg_latency_us": 73.67132384525495, 00:09:40.120 "min_latency_us": 23.699563318777294, 00:09:40.120 "max_latency_us": 1688.482096069869 00:09:40.120 } 00:09:40.120 ], 00:09:40.120 "core_count": 1 00:09:40.120 } 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68939 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68939 ']' 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68939 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68939 00:09:40.120 killing process with pid 68939 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68939' 00:09:40.120 03:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68939 00:09:40.120 [2024-11-20 03:16:29.510908] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.120 03:16:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68939 00:09:40.120 [2024-11-20 03:16:29.743351] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.roMFVBVu4v 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:41.500 00:09:41.500 real 0m4.527s 00:09:41.500 user 0m5.415s 00:09:41.500 sys 0m0.531s 00:09:41.500 ************************************ 00:09:41.500 END TEST raid_read_error_test 00:09:41.500 ************************************ 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.500 03:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.500 03:16:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:41.500 03:16:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:41.500 03:16:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.500 03:16:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.500 ************************************ 00:09:41.500 START TEST raid_write_error_test 00:09:41.500 ************************************ 00:09:41.500 03:16:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q6KvIsh092 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69090 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69090 00:09:41.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69090 ']' 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.500 03:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.500 [2024-11-20 03:16:31.079712] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:09:41.500 [2024-11-20 03:16:31.079912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69090 ] 00:09:41.760 [2024-11-20 03:16:31.254787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.760 [2024-11-20 03:16:31.369497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.019 [2024-11-20 03:16:31.565911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.019 [2024-11-20 03:16:31.565955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.589 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 BaseBdev1_malloc 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 true 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 [2024-11-20 03:16:31.982391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:42.590 [2024-11-20 03:16:31.982462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.590 [2024-11-20 03:16:31.982487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:42.590 [2024-11-20 03:16:31.982507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.590 [2024-11-20 03:16:31.984954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.590 [2024-11-20 03:16:31.985012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:42.590 BaseBdev1 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.590 BaseBdev2_malloc 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 true 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 [2024-11-20 03:16:32.049441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:42.590 [2024-11-20 03:16:32.049500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.590 [2024-11-20 03:16:32.049519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:42.590 [2024-11-20 03:16:32.049530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.590 [2024-11-20 03:16:32.051642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.590 [2024-11-20 03:16:32.051679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:42.590 BaseBdev2 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.590 03:16:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 BaseBdev3_malloc 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 true 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 [2024-11-20 03:16:32.125481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:42.590 [2024-11-20 03:16:32.125540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.590 [2024-11-20 03:16:32.125577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:42.590 [2024-11-20 03:16:32.125587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.590 [2024-11-20 03:16:32.127816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.590 [2024-11-20 03:16:32.127859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:42.590 BaseBdev3 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 [2024-11-20 03:16:32.137535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.590 [2024-11-20 03:16:32.139489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.590 [2024-11-20 03:16:32.139647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.590 [2024-11-20 03:16:32.139925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:42.590 [2024-11-20 03:16:32.139981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:42.590 [2024-11-20 03:16:32.140299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:42.590 [2024-11-20 03:16:32.140521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:42.590 [2024-11-20 03:16:32.140569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:42.590 [2024-11-20 03:16:32.140783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.590 "name": "raid_bdev1", 00:09:42.590 "uuid": "07d9d298-39dd-4b19-8665-a53e2560df34", 00:09:42.590 "strip_size_kb": 0, 00:09:42.590 "state": "online", 00:09:42.590 "raid_level": "raid1", 00:09:42.590 "superblock": true, 00:09:42.590 "num_base_bdevs": 3, 00:09:42.590 "num_base_bdevs_discovered": 3, 00:09:42.590 "num_base_bdevs_operational": 3, 00:09:42.590 "base_bdevs_list": [ 00:09:42.590 { 00:09:42.590 "name": "BaseBdev1", 00:09:42.590 
"uuid": "28961b9c-d70f-5113-bd55-50fd3de37033", 00:09:42.590 "is_configured": true, 00:09:42.590 "data_offset": 2048, 00:09:42.590 "data_size": 63488 00:09:42.590 }, 00:09:42.590 { 00:09:42.590 "name": "BaseBdev2", 00:09:42.590 "uuid": "14c46bab-c576-57d2-aa4e-8621a079e7f1", 00:09:42.590 "is_configured": true, 00:09:42.590 "data_offset": 2048, 00:09:42.590 "data_size": 63488 00:09:42.590 }, 00:09:42.590 { 00:09:42.590 "name": "BaseBdev3", 00:09:42.590 "uuid": "99336db1-8cf2-5dde-ab53-7c71eb08f8db", 00:09:42.590 "is_configured": true, 00:09:42.590 "data_offset": 2048, 00:09:42.590 "data_size": 63488 00:09:42.590 } 00:09:42.590 ] 00:09:42.590 }' 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.590 03:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.160 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:43.160 03:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:43.160 [2024-11-20 03:16:32.685930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.108 [2024-11-20 03:16:33.600955] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:44.108 [2024-11-20 03:16:33.601014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.108 [2024-11-20 03:16:33.601223] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.108 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.108 "name": "raid_bdev1", 00:09:44.108 "uuid": "07d9d298-39dd-4b19-8665-a53e2560df34", 00:09:44.108 "strip_size_kb": 0, 00:09:44.108 "state": "online", 00:09:44.108 "raid_level": "raid1", 00:09:44.108 "superblock": true, 00:09:44.108 "num_base_bdevs": 3, 00:09:44.108 "num_base_bdevs_discovered": 2, 00:09:44.108 "num_base_bdevs_operational": 2, 00:09:44.108 "base_bdevs_list": [ 00:09:44.108 { 00:09:44.108 "name": null, 00:09:44.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.108 "is_configured": false, 00:09:44.108 "data_offset": 0, 00:09:44.109 "data_size": 63488 00:09:44.109 }, 00:09:44.109 { 00:09:44.109 "name": "BaseBdev2", 00:09:44.109 "uuid": "14c46bab-c576-57d2-aa4e-8621a079e7f1", 00:09:44.109 "is_configured": true, 00:09:44.109 "data_offset": 2048, 00:09:44.109 "data_size": 63488 00:09:44.109 }, 00:09:44.109 { 00:09:44.109 "name": "BaseBdev3", 00:09:44.109 "uuid": "99336db1-8cf2-5dde-ab53-7c71eb08f8db", 00:09:44.109 "is_configured": true, 00:09:44.109 "data_offset": 2048, 00:09:44.109 "data_size": 63488 00:09:44.109 } 00:09:44.109 ] 00:09:44.109 }' 00:09:44.109 03:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.109 03:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.679 [2024-11-20 03:16:34.031210] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.679 [2024-11-20 03:16:34.031325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.679 [2024-11-20 03:16:34.033948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.679 [2024-11-20 03:16:34.034054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.679 [2024-11-20 03:16:34.034152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.679 [2024-11-20 03:16:34.034231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:44.679 { 00:09:44.679 "results": [ 00:09:44.679 { 00:09:44.679 "job": "raid_bdev1", 00:09:44.679 "core_mask": "0x1", 00:09:44.679 "workload": "randrw", 00:09:44.679 "percentage": 50, 00:09:44.679 "status": "finished", 00:09:44.679 "queue_depth": 1, 00:09:44.679 "io_size": 131072, 00:09:44.679 "runtime": 1.345936, 00:09:44.679 "iops": 14426.391745224142, 00:09:44.679 "mibps": 1803.2989681530178, 00:09:44.679 "io_failed": 0, 00:09:44.679 "io_timeout": 0, 00:09:44.679 "avg_latency_us": 66.69213512761631, 00:09:44.679 "min_latency_us": 23.699563318777294, 00:09:44.679 "max_latency_us": 1473.844541484716 00:09:44.679 } 00:09:44.679 ], 00:09:44.679 "core_count": 1 00:09:44.679 } 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69090 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69090 ']' 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69090 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:44.679 03:16:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69090 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.679 killing process with pid 69090 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69090' 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69090 00:09:44.679 [2024-11-20 03:16:34.078733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.679 03:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69090 00:09:44.679 [2024-11-20 03:16:34.310177] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q6KvIsh092 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:46.098 ************************************ 00:09:46.098 END TEST raid_write_error_test 00:09:46.098 ************************************ 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:46.098 00:09:46.098 real 0m4.508s 00:09:46.098 user 0m5.309s 00:09:46.098 sys 0m0.589s 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.098 03:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.098 03:16:35 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:46.098 03:16:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:46.098 03:16:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:46.098 03:16:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:46.098 03:16:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.098 03:16:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.098 ************************************ 00:09:46.098 START TEST raid_state_function_test 00:09:46.098 ************************************ 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.098 
03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:46.098 03:16:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:46.098 Process raid pid: 69228 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69228 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69228' 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69228 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69228 ']' 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.098 03:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.098 [2024-11-20 03:16:35.655199] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:09:46.098 [2024-11-20 03:16:35.655339] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.358 [2024-11-20 03:16:35.828641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.358 [2024-11-20 03:16:35.948769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.617 [2024-11-20 03:16:36.163220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.618 [2024-11-20 03:16:36.163269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.877 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.877 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:46.877 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:46.877 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.877 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.877 [2024-11-20 03:16:36.507652] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.877 [2024-11-20 03:16:36.507811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.877 [2024-11-20 03:16:36.507828] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.877 [2024-11-20 03:16:36.507840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.877 [2024-11-20 03:16:36.507848] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:46.877 [2024-11-20 03:16:36.507857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.877 [2024-11-20 03:16:36.507865] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:46.877 [2024-11-20 03:16:36.507874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.136 "name": "Existed_Raid", 00:09:47.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.136 "strip_size_kb": 64, 00:09:47.136 "state": "configuring", 00:09:47.136 "raid_level": "raid0", 00:09:47.136 "superblock": false, 00:09:47.136 "num_base_bdevs": 4, 00:09:47.136 "num_base_bdevs_discovered": 0, 00:09:47.136 "num_base_bdevs_operational": 4, 00:09:47.136 "base_bdevs_list": [ 00:09:47.136 { 00:09:47.136 "name": "BaseBdev1", 00:09:47.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.136 "is_configured": false, 00:09:47.136 "data_offset": 0, 00:09:47.136 "data_size": 0 00:09:47.136 }, 00:09:47.136 { 00:09:47.136 "name": "BaseBdev2", 00:09:47.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.136 "is_configured": false, 00:09:47.136 "data_offset": 0, 00:09:47.136 "data_size": 0 00:09:47.136 }, 00:09:47.136 { 00:09:47.136 "name": "BaseBdev3", 00:09:47.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.136 "is_configured": false, 00:09:47.136 "data_offset": 0, 00:09:47.136 "data_size": 0 00:09:47.136 }, 00:09:47.136 { 00:09:47.136 "name": "BaseBdev4", 00:09:47.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.136 "is_configured": false, 00:09:47.136 "data_offset": 0, 00:09:47.136 "data_size": 0 00:09:47.136 } 00:09:47.136 ] 00:09:47.136 }' 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.136 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 [2024-11-20 03:16:36.930854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.397 [2024-11-20 03:16:36.930961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 [2024-11-20 03:16:36.938824] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.397 [2024-11-20 03:16:36.938908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.397 [2024-11-20 03:16:36.938935] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.397 [2024-11-20 03:16:36.938958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.397 [2024-11-20 03:16:36.938976] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.397 [2024-11-20 03:16:36.938997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.397 [2024-11-20 03:16:36.939015] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:47.397 [2024-11-20 03:16:36.939036] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 [2024-11-20 03:16:36.986919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.397 BaseBdev1 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 03:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.397 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 [ 00:09:47.397 { 00:09:47.397 "name": "BaseBdev1", 00:09:47.397 "aliases": [ 00:09:47.397 "76cf2b53-b8ec-4d8a-a648-31228d617fa6" 00:09:47.397 ], 00:09:47.397 "product_name": "Malloc disk", 00:09:47.397 "block_size": 512, 00:09:47.397 "num_blocks": 65536, 00:09:47.397 "uuid": "76cf2b53-b8ec-4d8a-a648-31228d617fa6", 00:09:47.397 "assigned_rate_limits": { 00:09:47.397 "rw_ios_per_sec": 0, 00:09:47.397 "rw_mbytes_per_sec": 0, 00:09:47.397 "r_mbytes_per_sec": 0, 00:09:47.397 "w_mbytes_per_sec": 0 00:09:47.397 }, 00:09:47.397 "claimed": true, 00:09:47.397 "claim_type": "exclusive_write", 00:09:47.397 "zoned": false, 00:09:47.397 "supported_io_types": { 00:09:47.397 "read": true, 00:09:47.397 "write": true, 00:09:47.397 "unmap": true, 00:09:47.397 "flush": true, 00:09:47.397 "reset": true, 00:09:47.397 "nvme_admin": false, 00:09:47.397 "nvme_io": false, 00:09:47.397 "nvme_io_md": false, 00:09:47.397 "write_zeroes": true, 00:09:47.397 "zcopy": true, 00:09:47.397 "get_zone_info": false, 00:09:47.397 "zone_management": false, 00:09:47.397 "zone_append": false, 00:09:47.397 "compare": false, 00:09:47.397 "compare_and_write": false, 00:09:47.397 "abort": true, 00:09:47.397 "seek_hole": false, 00:09:47.397 "seek_data": false, 00:09:47.397 "copy": true, 00:09:47.397 "nvme_iov_md": false 00:09:47.397 }, 00:09:47.397 "memory_domains": [ 00:09:47.397 { 00:09:47.397 "dma_device_id": "system", 00:09:47.397 "dma_device_type": 1 00:09:47.397 }, 00:09:47.397 { 00:09:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.397 "dma_device_type": 2 00:09:47.397 } 00:09:47.397 ], 00:09:47.397 "driver_specific": {} 00:09:47.397 } 00:09:47.397 ] 00:09:47.397 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:47.397 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:47.397 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.397 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.397 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.657 "name": "Existed_Raid", 
00:09:47.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.657 "strip_size_kb": 64, 00:09:47.657 "state": "configuring", 00:09:47.657 "raid_level": "raid0", 00:09:47.657 "superblock": false, 00:09:47.657 "num_base_bdevs": 4, 00:09:47.657 "num_base_bdevs_discovered": 1, 00:09:47.657 "num_base_bdevs_operational": 4, 00:09:47.657 "base_bdevs_list": [ 00:09:47.657 { 00:09:47.657 "name": "BaseBdev1", 00:09:47.657 "uuid": "76cf2b53-b8ec-4d8a-a648-31228d617fa6", 00:09:47.657 "is_configured": true, 00:09:47.657 "data_offset": 0, 00:09:47.657 "data_size": 65536 00:09:47.657 }, 00:09:47.657 { 00:09:47.657 "name": "BaseBdev2", 00:09:47.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.657 "is_configured": false, 00:09:47.657 "data_offset": 0, 00:09:47.657 "data_size": 0 00:09:47.657 }, 00:09:47.657 { 00:09:47.657 "name": "BaseBdev3", 00:09:47.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.657 "is_configured": false, 00:09:47.657 "data_offset": 0, 00:09:47.657 "data_size": 0 00:09:47.657 }, 00:09:47.657 { 00:09:47.657 "name": "BaseBdev4", 00:09:47.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.657 "is_configured": false, 00:09:47.657 "data_offset": 0, 00:09:47.657 "data_size": 0 00:09:47.657 } 00:09:47.657 ] 00:09:47.657 }' 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.657 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.917 [2024-11-20 03:16:37.506146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.917 [2024-11-20 03:16:37.506208] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.917 [2024-11-20 03:16:37.518159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.917 [2024-11-20 03:16:37.520142] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.917 [2024-11-20 03:16:37.520193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.917 [2024-11-20 03:16:37.520204] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.917 [2024-11-20 03:16:37.520216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.917 [2024-11-20 03:16:37.520224] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:47.917 [2024-11-20 03:16:37.520233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.917 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.177 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.177 "name": "Existed_Raid", 00:09:48.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.177 "strip_size_kb": 64, 00:09:48.177 "state": "configuring", 00:09:48.177 "raid_level": "raid0", 00:09:48.177 "superblock": false, 00:09:48.177 "num_base_bdevs": 4, 00:09:48.177 
"num_base_bdevs_discovered": 1, 00:09:48.177 "num_base_bdevs_operational": 4, 00:09:48.177 "base_bdevs_list": [ 00:09:48.177 { 00:09:48.177 "name": "BaseBdev1", 00:09:48.177 "uuid": "76cf2b53-b8ec-4d8a-a648-31228d617fa6", 00:09:48.177 "is_configured": true, 00:09:48.177 "data_offset": 0, 00:09:48.177 "data_size": 65536 00:09:48.177 }, 00:09:48.177 { 00:09:48.177 "name": "BaseBdev2", 00:09:48.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.177 "is_configured": false, 00:09:48.177 "data_offset": 0, 00:09:48.177 "data_size": 0 00:09:48.177 }, 00:09:48.177 { 00:09:48.177 "name": "BaseBdev3", 00:09:48.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.177 "is_configured": false, 00:09:48.177 "data_offset": 0, 00:09:48.177 "data_size": 0 00:09:48.177 }, 00:09:48.177 { 00:09:48.177 "name": "BaseBdev4", 00:09:48.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.177 "is_configured": false, 00:09:48.177 "data_offset": 0, 00:09:48.177 "data_size": 0 00:09:48.177 } 00:09:48.177 ] 00:09:48.177 }' 00:09:48.177 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.177 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.436 03:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.436 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.436 03:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.436 [2024-11-20 03:16:38.042251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.436 BaseBdev2 00:09:48.436 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:48.437 03:16:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.437 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.696 [ 00:09:48.696 { 00:09:48.696 "name": "BaseBdev2", 00:09:48.696 "aliases": [ 00:09:48.696 "6c8f757f-72f7-4238-8271-fe61c3cbe819" 00:09:48.696 ], 00:09:48.696 "product_name": "Malloc disk", 00:09:48.696 "block_size": 512, 00:09:48.696 "num_blocks": 65536, 00:09:48.696 "uuid": "6c8f757f-72f7-4238-8271-fe61c3cbe819", 00:09:48.696 "assigned_rate_limits": { 00:09:48.696 "rw_ios_per_sec": 0, 00:09:48.696 "rw_mbytes_per_sec": 0, 00:09:48.696 "r_mbytes_per_sec": 0, 00:09:48.696 "w_mbytes_per_sec": 0 00:09:48.696 }, 00:09:48.696 "claimed": true, 00:09:48.696 "claim_type": "exclusive_write", 00:09:48.696 "zoned": false, 00:09:48.696 "supported_io_types": { 
00:09:48.696 "read": true, 00:09:48.696 "write": true, 00:09:48.696 "unmap": true, 00:09:48.696 "flush": true, 00:09:48.696 "reset": true, 00:09:48.696 "nvme_admin": false, 00:09:48.696 "nvme_io": false, 00:09:48.696 "nvme_io_md": false, 00:09:48.696 "write_zeroes": true, 00:09:48.696 "zcopy": true, 00:09:48.696 "get_zone_info": false, 00:09:48.696 "zone_management": false, 00:09:48.696 "zone_append": false, 00:09:48.696 "compare": false, 00:09:48.696 "compare_and_write": false, 00:09:48.696 "abort": true, 00:09:48.696 "seek_hole": false, 00:09:48.696 "seek_data": false, 00:09:48.696 "copy": true, 00:09:48.696 "nvme_iov_md": false 00:09:48.696 }, 00:09:48.696 "memory_domains": [ 00:09:48.696 { 00:09:48.696 "dma_device_id": "system", 00:09:48.696 "dma_device_type": 1 00:09:48.696 }, 00:09:48.696 { 00:09:48.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.696 "dma_device_type": 2 00:09:48.696 } 00:09:48.696 ], 00:09:48.696 "driver_specific": {} 00:09:48.696 } 00:09:48.696 ] 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.696 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.697 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.697 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.697 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.697 "name": "Existed_Raid", 00:09:48.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.697 "strip_size_kb": 64, 00:09:48.697 "state": "configuring", 00:09:48.697 "raid_level": "raid0", 00:09:48.697 "superblock": false, 00:09:48.697 "num_base_bdevs": 4, 00:09:48.697 "num_base_bdevs_discovered": 2, 00:09:48.697 "num_base_bdevs_operational": 4, 00:09:48.697 "base_bdevs_list": [ 00:09:48.697 { 00:09:48.697 "name": "BaseBdev1", 00:09:48.697 "uuid": "76cf2b53-b8ec-4d8a-a648-31228d617fa6", 00:09:48.697 "is_configured": true, 00:09:48.697 "data_offset": 0, 00:09:48.697 "data_size": 65536 00:09:48.697 }, 00:09:48.697 { 00:09:48.697 "name": "BaseBdev2", 00:09:48.697 "uuid": "6c8f757f-72f7-4238-8271-fe61c3cbe819", 00:09:48.697 
"is_configured": true, 00:09:48.697 "data_offset": 0, 00:09:48.697 "data_size": 65536 00:09:48.697 }, 00:09:48.697 { 00:09:48.697 "name": "BaseBdev3", 00:09:48.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.697 "is_configured": false, 00:09:48.697 "data_offset": 0, 00:09:48.697 "data_size": 0 00:09:48.697 }, 00:09:48.697 { 00:09:48.697 "name": "BaseBdev4", 00:09:48.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.697 "is_configured": false, 00:09:48.697 "data_offset": 0, 00:09:48.697 "data_size": 0 00:09:48.697 } 00:09:48.697 ] 00:09:48.697 }' 00:09:48.697 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.697 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.956 [2024-11-20 03:16:38.513000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.956 BaseBdev3 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.956 [ 00:09:48.956 { 00:09:48.956 "name": "BaseBdev3", 00:09:48.956 "aliases": [ 00:09:48.956 "7a2dbaae-2f86-470c-922d-aebd197c5cab" 00:09:48.956 ], 00:09:48.956 "product_name": "Malloc disk", 00:09:48.956 "block_size": 512, 00:09:48.956 "num_blocks": 65536, 00:09:48.956 "uuid": "7a2dbaae-2f86-470c-922d-aebd197c5cab", 00:09:48.956 "assigned_rate_limits": { 00:09:48.956 "rw_ios_per_sec": 0, 00:09:48.956 "rw_mbytes_per_sec": 0, 00:09:48.956 "r_mbytes_per_sec": 0, 00:09:48.956 "w_mbytes_per_sec": 0 00:09:48.956 }, 00:09:48.956 "claimed": true, 00:09:48.956 "claim_type": "exclusive_write", 00:09:48.956 "zoned": false, 00:09:48.956 "supported_io_types": { 00:09:48.956 "read": true, 00:09:48.956 "write": true, 00:09:48.956 "unmap": true, 00:09:48.956 "flush": true, 00:09:48.956 "reset": true, 00:09:48.956 "nvme_admin": false, 00:09:48.956 "nvme_io": false, 00:09:48.956 "nvme_io_md": false, 00:09:48.956 "write_zeroes": true, 00:09:48.956 "zcopy": true, 00:09:48.956 "get_zone_info": false, 00:09:48.956 "zone_management": false, 00:09:48.956 "zone_append": false, 00:09:48.956 "compare": false, 00:09:48.956 "compare_and_write": false, 
00:09:48.956 "abort": true, 00:09:48.956 "seek_hole": false, 00:09:48.956 "seek_data": false, 00:09:48.956 "copy": true, 00:09:48.956 "nvme_iov_md": false 00:09:48.956 }, 00:09:48.956 "memory_domains": [ 00:09:48.956 { 00:09:48.956 "dma_device_id": "system", 00:09:48.956 "dma_device_type": 1 00:09:48.956 }, 00:09:48.956 { 00:09:48.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.956 "dma_device_type": 2 00:09:48.956 } 00:09:48.956 ], 00:09:48.956 "driver_specific": {} 00:09:48.956 } 00:09:48.956 ] 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.956 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.215 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.215 "name": "Existed_Raid", 00:09:49.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.215 "strip_size_kb": 64, 00:09:49.215 "state": "configuring", 00:09:49.215 "raid_level": "raid0", 00:09:49.215 "superblock": false, 00:09:49.215 "num_base_bdevs": 4, 00:09:49.215 "num_base_bdevs_discovered": 3, 00:09:49.215 "num_base_bdevs_operational": 4, 00:09:49.215 "base_bdevs_list": [ 00:09:49.215 { 00:09:49.215 "name": "BaseBdev1", 00:09:49.215 "uuid": "76cf2b53-b8ec-4d8a-a648-31228d617fa6", 00:09:49.215 "is_configured": true, 00:09:49.215 "data_offset": 0, 00:09:49.215 "data_size": 65536 00:09:49.215 }, 00:09:49.215 { 00:09:49.215 "name": "BaseBdev2", 00:09:49.215 "uuid": "6c8f757f-72f7-4238-8271-fe61c3cbe819", 00:09:49.215 "is_configured": true, 00:09:49.215 "data_offset": 0, 00:09:49.215 "data_size": 65536 00:09:49.215 }, 00:09:49.215 { 00:09:49.215 "name": "BaseBdev3", 00:09:49.215 "uuid": "7a2dbaae-2f86-470c-922d-aebd197c5cab", 00:09:49.215 "is_configured": true, 00:09:49.215 "data_offset": 0, 00:09:49.215 "data_size": 65536 00:09:49.215 }, 00:09:49.215 { 00:09:49.215 "name": "BaseBdev4", 00:09:49.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.215 "is_configured": false, 
00:09:49.215 "data_offset": 0, 00:09:49.215 "data_size": 0 00:09:49.215 } 00:09:49.215 ] 00:09:49.215 }' 00:09:49.215 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.215 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.474 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:49.474 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.474 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.474 [2024-11-20 03:16:38.986448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:49.474 [2024-11-20 03:16:38.986597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.474 [2024-11-20 03:16:38.986642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:49.474 [2024-11-20 03:16:38.986956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:49.474 [2024-11-20 03:16:38.987165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.474 [2024-11-20 03:16:38.987215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:49.474 [2024-11-20 03:16:38.987569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.474 BaseBdev4 00:09:49.474 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.474 03:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:49.474 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:49.475 03:16:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.475 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.475 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.475 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.475 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.475 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.475 03:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.475 [ 00:09:49.475 { 00:09:49.475 "name": "BaseBdev4", 00:09:49.475 "aliases": [ 00:09:49.475 "bf4fb80e-a1bb-4792-a216-357c243561ee" 00:09:49.475 ], 00:09:49.475 "product_name": "Malloc disk", 00:09:49.475 "block_size": 512, 00:09:49.475 "num_blocks": 65536, 00:09:49.475 "uuid": "bf4fb80e-a1bb-4792-a216-357c243561ee", 00:09:49.475 "assigned_rate_limits": { 00:09:49.475 "rw_ios_per_sec": 0, 00:09:49.475 "rw_mbytes_per_sec": 0, 00:09:49.475 "r_mbytes_per_sec": 0, 00:09:49.475 "w_mbytes_per_sec": 0 00:09:49.475 }, 00:09:49.475 "claimed": true, 00:09:49.475 "claim_type": "exclusive_write", 00:09:49.475 "zoned": false, 00:09:49.475 "supported_io_types": { 00:09:49.475 "read": true, 00:09:49.475 "write": true, 00:09:49.475 "unmap": true, 00:09:49.475 "flush": true, 00:09:49.475 "reset": true, 00:09:49.475 
"nvme_admin": false, 00:09:49.475 "nvme_io": false, 00:09:49.475 "nvme_io_md": false, 00:09:49.475 "write_zeroes": true, 00:09:49.475 "zcopy": true, 00:09:49.475 "get_zone_info": false, 00:09:49.475 "zone_management": false, 00:09:49.475 "zone_append": false, 00:09:49.475 "compare": false, 00:09:49.475 "compare_and_write": false, 00:09:49.475 "abort": true, 00:09:49.475 "seek_hole": false, 00:09:49.475 "seek_data": false, 00:09:49.475 "copy": true, 00:09:49.475 "nvme_iov_md": false 00:09:49.475 }, 00:09:49.475 "memory_domains": [ 00:09:49.475 { 00:09:49.475 "dma_device_id": "system", 00:09:49.475 "dma_device_type": 1 00:09:49.475 }, 00:09:49.475 { 00:09:49.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.475 "dma_device_type": 2 00:09:49.475 } 00:09:49.475 ], 00:09:49.475 "driver_specific": {} 00:09:49.475 } 00:09:49.475 ] 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.475 03:16:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.475 "name": "Existed_Raid", 00:09:49.475 "uuid": "40710387-cf5f-4293-a601-fc0884e82266", 00:09:49.475 "strip_size_kb": 64, 00:09:49.475 "state": "online", 00:09:49.475 "raid_level": "raid0", 00:09:49.475 "superblock": false, 00:09:49.475 "num_base_bdevs": 4, 00:09:49.475 "num_base_bdevs_discovered": 4, 00:09:49.475 "num_base_bdevs_operational": 4, 00:09:49.475 "base_bdevs_list": [ 00:09:49.475 { 00:09:49.475 "name": "BaseBdev1", 00:09:49.475 "uuid": "76cf2b53-b8ec-4d8a-a648-31228d617fa6", 00:09:49.475 "is_configured": true, 00:09:49.475 "data_offset": 0, 00:09:49.475 "data_size": 65536 00:09:49.475 }, 00:09:49.475 { 00:09:49.475 "name": "BaseBdev2", 00:09:49.475 "uuid": "6c8f757f-72f7-4238-8271-fe61c3cbe819", 00:09:49.475 "is_configured": true, 00:09:49.475 "data_offset": 0, 00:09:49.475 "data_size": 65536 00:09:49.475 }, 00:09:49.475 { 00:09:49.475 "name": "BaseBdev3", 00:09:49.475 "uuid": 
"7a2dbaae-2f86-470c-922d-aebd197c5cab", 00:09:49.475 "is_configured": true, 00:09:49.475 "data_offset": 0, 00:09:49.475 "data_size": 65536 00:09:49.475 }, 00:09:49.475 { 00:09:49.475 "name": "BaseBdev4", 00:09:49.475 "uuid": "bf4fb80e-a1bb-4792-a216-357c243561ee", 00:09:49.475 "is_configured": true, 00:09:49.475 "data_offset": 0, 00:09:49.475 "data_size": 65536 00:09:49.475 } 00:09:49.475 ] 00:09:49.475 }' 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.475 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.052 [2024-11-20 03:16:39.505980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.052 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.052 03:16:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.052 "name": "Existed_Raid", 00:09:50.052 "aliases": [ 00:09:50.052 "40710387-cf5f-4293-a601-fc0884e82266" 00:09:50.052 ], 00:09:50.052 "product_name": "Raid Volume", 00:09:50.052 "block_size": 512, 00:09:50.052 "num_blocks": 262144, 00:09:50.052 "uuid": "40710387-cf5f-4293-a601-fc0884e82266", 00:09:50.052 "assigned_rate_limits": { 00:09:50.052 "rw_ios_per_sec": 0, 00:09:50.052 "rw_mbytes_per_sec": 0, 00:09:50.052 "r_mbytes_per_sec": 0, 00:09:50.052 "w_mbytes_per_sec": 0 00:09:50.052 }, 00:09:50.052 "claimed": false, 00:09:50.052 "zoned": false, 00:09:50.052 "supported_io_types": { 00:09:50.052 "read": true, 00:09:50.052 "write": true, 00:09:50.052 "unmap": true, 00:09:50.052 "flush": true, 00:09:50.052 "reset": true, 00:09:50.052 "nvme_admin": false, 00:09:50.052 "nvme_io": false, 00:09:50.052 "nvme_io_md": false, 00:09:50.052 "write_zeroes": true, 00:09:50.052 "zcopy": false, 00:09:50.052 "get_zone_info": false, 00:09:50.052 "zone_management": false, 00:09:50.052 "zone_append": false, 00:09:50.052 "compare": false, 00:09:50.052 "compare_and_write": false, 00:09:50.052 "abort": false, 00:09:50.052 "seek_hole": false, 00:09:50.052 "seek_data": false, 00:09:50.052 "copy": false, 00:09:50.052 "nvme_iov_md": false 00:09:50.052 }, 00:09:50.052 "memory_domains": [ 00:09:50.052 { 00:09:50.052 "dma_device_id": "system", 00:09:50.053 "dma_device_type": 1 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.053 "dma_device_type": 2 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "dma_device_id": "system", 00:09:50.053 "dma_device_type": 1 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.053 "dma_device_type": 2 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "dma_device_id": "system", 00:09:50.053 "dma_device_type": 1 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:50.053 "dma_device_type": 2 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "dma_device_id": "system", 00:09:50.053 "dma_device_type": 1 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.053 "dma_device_type": 2 00:09:50.053 } 00:09:50.053 ], 00:09:50.053 "driver_specific": { 00:09:50.053 "raid": { 00:09:50.053 "uuid": "40710387-cf5f-4293-a601-fc0884e82266", 00:09:50.053 "strip_size_kb": 64, 00:09:50.053 "state": "online", 00:09:50.053 "raid_level": "raid0", 00:09:50.053 "superblock": false, 00:09:50.053 "num_base_bdevs": 4, 00:09:50.053 "num_base_bdevs_discovered": 4, 00:09:50.053 "num_base_bdevs_operational": 4, 00:09:50.053 "base_bdevs_list": [ 00:09:50.053 { 00:09:50.053 "name": "BaseBdev1", 00:09:50.053 "uuid": "76cf2b53-b8ec-4d8a-a648-31228d617fa6", 00:09:50.053 "is_configured": true, 00:09:50.053 "data_offset": 0, 00:09:50.053 "data_size": 65536 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "name": "BaseBdev2", 00:09:50.053 "uuid": "6c8f757f-72f7-4238-8271-fe61c3cbe819", 00:09:50.053 "is_configured": true, 00:09:50.053 "data_offset": 0, 00:09:50.053 "data_size": 65536 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "name": "BaseBdev3", 00:09:50.053 "uuid": "7a2dbaae-2f86-470c-922d-aebd197c5cab", 00:09:50.053 "is_configured": true, 00:09:50.053 "data_offset": 0, 00:09:50.053 "data_size": 65536 00:09:50.053 }, 00:09:50.053 { 00:09:50.053 "name": "BaseBdev4", 00:09:50.053 "uuid": "bf4fb80e-a1bb-4792-a216-357c243561ee", 00:09:50.053 "is_configured": true, 00:09:50.053 "data_offset": 0, 00:09:50.053 "data_size": 65536 00:09:50.053 } 00:09:50.053 ] 00:09:50.053 } 00:09:50.053 } 00:09:50.053 }' 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:50.053 BaseBdev2 00:09:50.053 BaseBdev3 
00:09:50.053 BaseBdev4' 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.053 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.313 03:16:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.313 03:16:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.313 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.313 [2024-11-20 03:16:39.853082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.313 [2024-11-20 03:16:39.853115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.313 [2024-11-20 03:16:39.853168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.573 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.573 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:50.573 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.574 03:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.574 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.574 "name": "Existed_Raid", 00:09:50.574 "uuid": "40710387-cf5f-4293-a601-fc0884e82266", 00:09:50.574 "strip_size_kb": 64, 00:09:50.574 "state": "offline", 00:09:50.574 "raid_level": "raid0", 00:09:50.574 "superblock": false, 00:09:50.574 "num_base_bdevs": 4, 00:09:50.574 "num_base_bdevs_discovered": 3, 00:09:50.574 "num_base_bdevs_operational": 3, 00:09:50.574 "base_bdevs_list": [ 00:09:50.574 { 00:09:50.574 "name": null, 00:09:50.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.574 "is_configured": false, 00:09:50.574 "data_offset": 0, 00:09:50.574 "data_size": 65536 00:09:50.574 }, 00:09:50.574 { 00:09:50.574 "name": "BaseBdev2", 00:09:50.574 "uuid": "6c8f757f-72f7-4238-8271-fe61c3cbe819", 00:09:50.574 "is_configured": 
true, 00:09:50.574 "data_offset": 0, 00:09:50.574 "data_size": 65536 00:09:50.574 }, 00:09:50.574 { 00:09:50.574 "name": "BaseBdev3", 00:09:50.574 "uuid": "7a2dbaae-2f86-470c-922d-aebd197c5cab", 00:09:50.574 "is_configured": true, 00:09:50.574 "data_offset": 0, 00:09:50.574 "data_size": 65536 00:09:50.574 }, 00:09:50.574 { 00:09:50.574 "name": "BaseBdev4", 00:09:50.574 "uuid": "bf4fb80e-a1bb-4792-a216-357c243561ee", 00:09:50.574 "is_configured": true, 00:09:50.574 "data_offset": 0, 00:09:50.574 "data_size": 65536 00:09:50.574 } 00:09:50.574 ] 00:09:50.574 }' 00:09:50.574 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.574 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:50.834 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.834 [2024-11-20 03:16:40.440686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.094 [2024-11-20 03:16:40.593369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.094 03:16:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.094 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.354 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.355 [2024-11-20 03:16:40.747636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:51.355 [2024-11-20 03:16:40.747749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.355 BaseBdev2 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.355 [ 00:09:51.355 { 00:09:51.355 "name": "BaseBdev2", 00:09:51.355 "aliases": [ 00:09:51.355 "e6ee2280-85a2-45da-abe5-6b0f706d639d" 00:09:51.355 ], 00:09:51.355 "product_name": "Malloc disk", 00:09:51.355 "block_size": 512, 00:09:51.355 "num_blocks": 65536, 00:09:51.355 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:51.355 "assigned_rate_limits": { 00:09:51.355 "rw_ios_per_sec": 0, 00:09:51.355 "rw_mbytes_per_sec": 0, 00:09:51.355 "r_mbytes_per_sec": 0, 00:09:51.355 "w_mbytes_per_sec": 0 00:09:51.355 }, 00:09:51.355 "claimed": false, 00:09:51.355 "zoned": false, 00:09:51.355 "supported_io_types": { 00:09:51.355 "read": true, 00:09:51.355 "write": true, 00:09:51.355 "unmap": true, 00:09:51.355 "flush": true, 00:09:51.355 "reset": true, 00:09:51.355 "nvme_admin": false, 00:09:51.355 "nvme_io": false, 00:09:51.355 "nvme_io_md": false, 00:09:51.355 "write_zeroes": true, 00:09:51.355 "zcopy": true, 00:09:51.355 "get_zone_info": false, 00:09:51.355 "zone_management": false, 00:09:51.355 "zone_append": false, 00:09:51.355 "compare": false, 00:09:51.355 "compare_and_write": false, 00:09:51.355 "abort": true, 00:09:51.355 "seek_hole": false, 00:09:51.355 
"seek_data": false, 00:09:51.355 "copy": true, 00:09:51.355 "nvme_iov_md": false 00:09:51.355 }, 00:09:51.355 "memory_domains": [ 00:09:51.355 { 00:09:51.355 "dma_device_id": "system", 00:09:51.355 "dma_device_type": 1 00:09:51.355 }, 00:09:51.355 { 00:09:51.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.355 "dma_device_type": 2 00:09:51.355 } 00:09:51.355 ], 00:09:51.355 "driver_specific": {} 00:09:51.355 } 00:09:51.355 ] 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.355 03:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.616 BaseBdev3 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.616 [ 00:09:51.616 { 00:09:51.616 "name": "BaseBdev3", 00:09:51.616 "aliases": [ 00:09:51.616 "4f65a973-5faf-4461-854e-4c9fc9bbbe07" 00:09:51.616 ], 00:09:51.616 "product_name": "Malloc disk", 00:09:51.616 "block_size": 512, 00:09:51.616 "num_blocks": 65536, 00:09:51.616 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 00:09:51.616 "assigned_rate_limits": { 00:09:51.616 "rw_ios_per_sec": 0, 00:09:51.616 "rw_mbytes_per_sec": 0, 00:09:51.616 "r_mbytes_per_sec": 0, 00:09:51.616 "w_mbytes_per_sec": 0 00:09:51.616 }, 00:09:51.616 "claimed": false, 00:09:51.616 "zoned": false, 00:09:51.616 "supported_io_types": { 00:09:51.616 "read": true, 00:09:51.616 "write": true, 00:09:51.616 "unmap": true, 00:09:51.616 "flush": true, 00:09:51.616 "reset": true, 00:09:51.616 "nvme_admin": false, 00:09:51.616 "nvme_io": false, 00:09:51.616 "nvme_io_md": false, 00:09:51.616 "write_zeroes": true, 00:09:51.616 "zcopy": true, 00:09:51.616 "get_zone_info": false, 00:09:51.616 "zone_management": false, 00:09:51.616 "zone_append": false, 00:09:51.616 "compare": false, 00:09:51.616 "compare_and_write": false, 00:09:51.616 "abort": true, 00:09:51.616 "seek_hole": false, 00:09:51.616 "seek_data": false, 
00:09:51.616 "copy": true, 00:09:51.616 "nvme_iov_md": false 00:09:51.616 }, 00:09:51.616 "memory_domains": [ 00:09:51.616 { 00:09:51.616 "dma_device_id": "system", 00:09:51.616 "dma_device_type": 1 00:09:51.616 }, 00:09:51.616 { 00:09:51.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.616 "dma_device_type": 2 00:09:51.616 } 00:09:51.616 ], 00:09:51.616 "driver_specific": {} 00:09:51.616 } 00:09:51.616 ] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.616 BaseBdev4 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.616 
03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.616 [ 00:09:51.616 { 00:09:51.616 "name": "BaseBdev4", 00:09:51.616 "aliases": [ 00:09:51.616 "47ad2dd8-4126-4562-b1b1-3ba7146d7045" 00:09:51.616 ], 00:09:51.616 "product_name": "Malloc disk", 00:09:51.616 "block_size": 512, 00:09:51.616 "num_blocks": 65536, 00:09:51.616 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:51.616 "assigned_rate_limits": { 00:09:51.616 "rw_ios_per_sec": 0, 00:09:51.616 "rw_mbytes_per_sec": 0, 00:09:51.616 "r_mbytes_per_sec": 0, 00:09:51.616 "w_mbytes_per_sec": 0 00:09:51.616 }, 00:09:51.616 "claimed": false, 00:09:51.616 "zoned": false, 00:09:51.616 "supported_io_types": { 00:09:51.616 "read": true, 00:09:51.616 "write": true, 00:09:51.616 "unmap": true, 00:09:51.616 "flush": true, 00:09:51.616 "reset": true, 00:09:51.616 "nvme_admin": false, 00:09:51.616 "nvme_io": false, 00:09:51.616 "nvme_io_md": false, 00:09:51.616 "write_zeroes": true, 00:09:51.616 "zcopy": true, 00:09:51.616 "get_zone_info": false, 00:09:51.616 "zone_management": false, 00:09:51.616 "zone_append": false, 00:09:51.616 "compare": false, 00:09:51.616 "compare_and_write": false, 00:09:51.616 "abort": true, 00:09:51.616 "seek_hole": false, 00:09:51.616 "seek_data": false, 00:09:51.616 
"copy": true, 00:09:51.616 "nvme_iov_md": false 00:09:51.616 }, 00:09:51.616 "memory_domains": [ 00:09:51.616 { 00:09:51.616 "dma_device_id": "system", 00:09:51.616 "dma_device_type": 1 00:09:51.616 }, 00:09:51.616 { 00:09:51.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.616 "dma_device_type": 2 00:09:51.616 } 00:09:51.616 ], 00:09:51.616 "driver_specific": {} 00:09:51.616 } 00:09:51.616 ] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.616 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.617 [2024-11-20 03:16:41.149442] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.617 [2024-11-20 03:16:41.149536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.617 [2024-11-20 03:16:41.149565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.617 [2024-11-20 03:16:41.151438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.617 [2024-11-20 03:16:41.151492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.617 03:16:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.617 "name": "Existed_Raid", 00:09:51.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.617 "strip_size_kb": 64, 00:09:51.617 "state": "configuring", 00:09:51.617 
"raid_level": "raid0", 00:09:51.617 "superblock": false, 00:09:51.617 "num_base_bdevs": 4, 00:09:51.617 "num_base_bdevs_discovered": 3, 00:09:51.617 "num_base_bdevs_operational": 4, 00:09:51.617 "base_bdevs_list": [ 00:09:51.617 { 00:09:51.617 "name": "BaseBdev1", 00:09:51.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.617 "is_configured": false, 00:09:51.617 "data_offset": 0, 00:09:51.617 "data_size": 0 00:09:51.617 }, 00:09:51.617 { 00:09:51.617 "name": "BaseBdev2", 00:09:51.617 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:51.617 "is_configured": true, 00:09:51.617 "data_offset": 0, 00:09:51.617 "data_size": 65536 00:09:51.617 }, 00:09:51.617 { 00:09:51.617 "name": "BaseBdev3", 00:09:51.617 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 00:09:51.617 "is_configured": true, 00:09:51.617 "data_offset": 0, 00:09:51.617 "data_size": 65536 00:09:51.617 }, 00:09:51.617 { 00:09:51.617 "name": "BaseBdev4", 00:09:51.617 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:51.617 "is_configured": true, 00:09:51.617 "data_offset": 0, 00:09:51.617 "data_size": 65536 00:09:51.617 } 00:09:51.617 ] 00:09:51.617 }' 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.617 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.187 [2024-11-20 03:16:41.584751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.187 "name": "Existed_Raid", 00:09:52.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.187 "strip_size_kb": 64, 00:09:52.187 "state": "configuring", 00:09:52.187 "raid_level": "raid0", 00:09:52.187 "superblock": false, 00:09:52.187 
"num_base_bdevs": 4, 00:09:52.187 "num_base_bdevs_discovered": 2, 00:09:52.187 "num_base_bdevs_operational": 4, 00:09:52.187 "base_bdevs_list": [ 00:09:52.187 { 00:09:52.187 "name": "BaseBdev1", 00:09:52.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.187 "is_configured": false, 00:09:52.187 "data_offset": 0, 00:09:52.187 "data_size": 0 00:09:52.187 }, 00:09:52.187 { 00:09:52.187 "name": null, 00:09:52.187 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:52.187 "is_configured": false, 00:09:52.187 "data_offset": 0, 00:09:52.187 "data_size": 65536 00:09:52.187 }, 00:09:52.187 { 00:09:52.187 "name": "BaseBdev3", 00:09:52.187 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 00:09:52.187 "is_configured": true, 00:09:52.187 "data_offset": 0, 00:09:52.187 "data_size": 65536 00:09:52.187 }, 00:09:52.187 { 00:09:52.187 "name": "BaseBdev4", 00:09:52.187 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:52.187 "is_configured": true, 00:09:52.187 "data_offset": 0, 00:09:52.187 "data_size": 65536 00:09:52.187 } 00:09:52.187 ] 00:09:52.187 }' 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.187 03:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.446 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.447 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.447 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.447 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.447 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.706 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:52.706 03:16:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:52.706 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.706 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.706 [2024-11-20 03:16:42.135484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.706 BaseBdev1 00:09:52.706 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.706 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:52.706 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.707 [ 00:09:52.707 { 00:09:52.707 "name": "BaseBdev1", 00:09:52.707 "aliases": [ 00:09:52.707 "ddb0c6ff-d2be-4398-9182-ff0e943f9778" 00:09:52.707 ], 00:09:52.707 "product_name": "Malloc disk", 00:09:52.707 "block_size": 512, 00:09:52.707 "num_blocks": 65536, 00:09:52.707 "uuid": "ddb0c6ff-d2be-4398-9182-ff0e943f9778", 00:09:52.707 "assigned_rate_limits": { 00:09:52.707 "rw_ios_per_sec": 0, 00:09:52.707 "rw_mbytes_per_sec": 0, 00:09:52.707 "r_mbytes_per_sec": 0, 00:09:52.707 "w_mbytes_per_sec": 0 00:09:52.707 }, 00:09:52.707 "claimed": true, 00:09:52.707 "claim_type": "exclusive_write", 00:09:52.707 "zoned": false, 00:09:52.707 "supported_io_types": { 00:09:52.707 "read": true, 00:09:52.707 "write": true, 00:09:52.707 "unmap": true, 00:09:52.707 "flush": true, 00:09:52.707 "reset": true, 00:09:52.707 "nvme_admin": false, 00:09:52.707 "nvme_io": false, 00:09:52.707 "nvme_io_md": false, 00:09:52.707 "write_zeroes": true, 00:09:52.707 "zcopy": true, 00:09:52.707 "get_zone_info": false, 00:09:52.707 "zone_management": false, 00:09:52.707 "zone_append": false, 00:09:52.707 "compare": false, 00:09:52.707 "compare_and_write": false, 00:09:52.707 "abort": true, 00:09:52.707 "seek_hole": false, 00:09:52.707 "seek_data": false, 00:09:52.707 "copy": true, 00:09:52.707 "nvme_iov_md": false 00:09:52.707 }, 00:09:52.707 "memory_domains": [ 00:09:52.707 { 00:09:52.707 "dma_device_id": "system", 00:09:52.707 "dma_device_type": 1 00:09:52.707 }, 00:09:52.707 { 00:09:52.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.707 "dma_device_type": 2 00:09:52.707 } 00:09:52.707 ], 00:09:52.707 "driver_specific": {} 00:09:52.707 } 00:09:52.707 ] 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.707 "name": "Existed_Raid", 00:09:52.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.707 "strip_size_kb": 64, 00:09:52.707 "state": "configuring", 00:09:52.707 "raid_level": "raid0", 00:09:52.707 "superblock": false, 
00:09:52.707 "num_base_bdevs": 4, 00:09:52.707 "num_base_bdevs_discovered": 3, 00:09:52.707 "num_base_bdevs_operational": 4, 00:09:52.707 "base_bdevs_list": [ 00:09:52.707 { 00:09:52.707 "name": "BaseBdev1", 00:09:52.707 "uuid": "ddb0c6ff-d2be-4398-9182-ff0e943f9778", 00:09:52.707 "is_configured": true, 00:09:52.707 "data_offset": 0, 00:09:52.707 "data_size": 65536 00:09:52.707 }, 00:09:52.707 { 00:09:52.707 "name": null, 00:09:52.707 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:52.707 "is_configured": false, 00:09:52.707 "data_offset": 0, 00:09:52.707 "data_size": 65536 00:09:52.707 }, 00:09:52.707 { 00:09:52.707 "name": "BaseBdev3", 00:09:52.707 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 00:09:52.707 "is_configured": true, 00:09:52.707 "data_offset": 0, 00:09:52.707 "data_size": 65536 00:09:52.707 }, 00:09:52.707 { 00:09:52.707 "name": "BaseBdev4", 00:09:52.707 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:52.707 "is_configured": true, 00:09:52.707 "data_offset": 0, 00:09:52.707 "data_size": 65536 00:09:52.707 } 00:09:52.707 ] 00:09:52.707 }' 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.707 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.083 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:53.083 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.084 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.084 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.084 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.346 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:53.346 03:16:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:53.346 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.346 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.346 [2024-11-20 03:16:42.726645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.347 03:16:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.347 "name": "Existed_Raid", 00:09:53.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.347 "strip_size_kb": 64, 00:09:53.347 "state": "configuring", 00:09:53.347 "raid_level": "raid0", 00:09:53.347 "superblock": false, 00:09:53.347 "num_base_bdevs": 4, 00:09:53.347 "num_base_bdevs_discovered": 2, 00:09:53.347 "num_base_bdevs_operational": 4, 00:09:53.347 "base_bdevs_list": [ 00:09:53.347 { 00:09:53.347 "name": "BaseBdev1", 00:09:53.347 "uuid": "ddb0c6ff-d2be-4398-9182-ff0e943f9778", 00:09:53.347 "is_configured": true, 00:09:53.347 "data_offset": 0, 00:09:53.347 "data_size": 65536 00:09:53.347 }, 00:09:53.347 { 00:09:53.347 "name": null, 00:09:53.347 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:53.347 "is_configured": false, 00:09:53.347 "data_offset": 0, 00:09:53.347 "data_size": 65536 00:09:53.347 }, 00:09:53.347 { 00:09:53.347 "name": null, 00:09:53.347 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 00:09:53.347 "is_configured": false, 00:09:53.347 "data_offset": 0, 00:09:53.347 "data_size": 65536 00:09:53.347 }, 00:09:53.347 { 00:09:53.347 "name": "BaseBdev4", 00:09:53.347 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:53.347 "is_configured": true, 00:09:53.347 "data_offset": 0, 00:09:53.347 "data_size": 65536 00:09:53.347 } 00:09:53.347 ] 00:09:53.347 }' 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.347 03:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.606 03:16:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.606 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.606 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.606 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.606 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.866 [2024-11-20 03:16:43.249709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.866 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.866 "name": "Existed_Raid", 00:09:53.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.866 "strip_size_kb": 64, 00:09:53.866 "state": "configuring", 00:09:53.867 "raid_level": "raid0", 00:09:53.867 "superblock": false, 00:09:53.867 "num_base_bdevs": 4, 00:09:53.867 "num_base_bdevs_discovered": 3, 00:09:53.867 "num_base_bdevs_operational": 4, 00:09:53.867 "base_bdevs_list": [ 00:09:53.867 { 00:09:53.867 "name": "BaseBdev1", 00:09:53.867 "uuid": "ddb0c6ff-d2be-4398-9182-ff0e943f9778", 00:09:53.867 "is_configured": true, 00:09:53.867 "data_offset": 0, 00:09:53.867 "data_size": 65536 00:09:53.867 }, 00:09:53.867 { 00:09:53.867 "name": null, 00:09:53.867 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:53.867 "is_configured": false, 00:09:53.867 "data_offset": 0, 00:09:53.867 "data_size": 65536 00:09:53.867 }, 00:09:53.867 { 00:09:53.867 "name": "BaseBdev3", 00:09:53.867 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 
00:09:53.867 "is_configured": true, 00:09:53.867 "data_offset": 0, 00:09:53.867 "data_size": 65536 00:09:53.867 }, 00:09:53.867 { 00:09:53.867 "name": "BaseBdev4", 00:09:53.867 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:53.867 "is_configured": true, 00:09:53.867 "data_offset": 0, 00:09:53.867 "data_size": 65536 00:09:53.867 } 00:09:53.867 ] 00:09:53.867 }' 00:09:53.867 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.867 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.127 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.127 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.127 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.127 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:54.127 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.127 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:54.127 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:54.127 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.127 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.127 [2024-11-20 03:16:43.684960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.388 03:16:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.388 "name": "Existed_Raid", 00:09:54.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.388 "strip_size_kb": 64, 00:09:54.388 "state": "configuring", 00:09:54.388 "raid_level": "raid0", 00:09:54.388 "superblock": false, 00:09:54.388 "num_base_bdevs": 4, 00:09:54.388 "num_base_bdevs_discovered": 2, 00:09:54.388 
"num_base_bdevs_operational": 4, 00:09:54.388 "base_bdevs_list": [ 00:09:54.388 { 00:09:54.388 "name": null, 00:09:54.388 "uuid": "ddb0c6ff-d2be-4398-9182-ff0e943f9778", 00:09:54.388 "is_configured": false, 00:09:54.388 "data_offset": 0, 00:09:54.388 "data_size": 65536 00:09:54.388 }, 00:09:54.388 { 00:09:54.388 "name": null, 00:09:54.388 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:54.388 "is_configured": false, 00:09:54.388 "data_offset": 0, 00:09:54.388 "data_size": 65536 00:09:54.388 }, 00:09:54.388 { 00:09:54.388 "name": "BaseBdev3", 00:09:54.388 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 00:09:54.388 "is_configured": true, 00:09:54.388 "data_offset": 0, 00:09:54.388 "data_size": 65536 00:09:54.388 }, 00:09:54.388 { 00:09:54.388 "name": "BaseBdev4", 00:09:54.388 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:54.388 "is_configured": true, 00:09:54.388 "data_offset": 0, 00:09:54.388 "data_size": 65536 00:09:54.388 } 00:09:54.388 ] 00:09:54.388 }' 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.388 03:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.648 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.648 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.648 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.648 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:54.648 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.648 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:54.648 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:54.648 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.648 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.648 [2024-11-20 03:16:44.280986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.909 
03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.909 "name": "Existed_Raid", 00:09:54.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.909 "strip_size_kb": 64, 00:09:54.909 "state": "configuring", 00:09:54.909 "raid_level": "raid0", 00:09:54.909 "superblock": false, 00:09:54.909 "num_base_bdevs": 4, 00:09:54.909 "num_base_bdevs_discovered": 3, 00:09:54.909 "num_base_bdevs_operational": 4, 00:09:54.909 "base_bdevs_list": [ 00:09:54.909 { 00:09:54.909 "name": null, 00:09:54.909 "uuid": "ddb0c6ff-d2be-4398-9182-ff0e943f9778", 00:09:54.909 "is_configured": false, 00:09:54.909 "data_offset": 0, 00:09:54.909 "data_size": 65536 00:09:54.909 }, 00:09:54.909 { 00:09:54.909 "name": "BaseBdev2", 00:09:54.909 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:54.909 "is_configured": true, 00:09:54.909 "data_offset": 0, 00:09:54.909 "data_size": 65536 00:09:54.909 }, 00:09:54.909 { 00:09:54.909 "name": "BaseBdev3", 00:09:54.909 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 00:09:54.909 "is_configured": true, 00:09:54.909 "data_offset": 0, 00:09:54.909 "data_size": 65536 00:09:54.909 }, 00:09:54.909 { 00:09:54.909 "name": "BaseBdev4", 00:09:54.909 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:54.909 "is_configured": true, 00:09:54.909 "data_offset": 0, 00:09:54.909 "data_size": 65536 00:09:54.909 } 00:09:54.909 ] 00:09:54.909 }' 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.909 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.168 03:16:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:55.168 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.169 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.169 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.169 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:55.169 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.169 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:55.169 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.169 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.428 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.428 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ddb0c6ff-d2be-4398-9182-ff0e943f9778 00:09:55.428 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.429 [2024-11-20 03:16:44.877707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:55.429 [2024-11-20 03:16:44.877765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:55.429 [2024-11-20 03:16:44.877773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:55.429 [2024-11-20 03:16:44.878033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:09:55.429 [2024-11-20 03:16:44.878186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:55.429 [2024-11-20 03:16:44.878198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:55.429 [2024-11-20 03:16:44.878452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.429 NewBaseBdev 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:55.429 [ 00:09:55.429 { 00:09:55.429 "name": "NewBaseBdev", 00:09:55.429 "aliases": [ 00:09:55.429 "ddb0c6ff-d2be-4398-9182-ff0e943f9778" 00:09:55.429 ], 00:09:55.429 "product_name": "Malloc disk", 00:09:55.429 "block_size": 512, 00:09:55.429 "num_blocks": 65536, 00:09:55.429 "uuid": "ddb0c6ff-d2be-4398-9182-ff0e943f9778", 00:09:55.429 "assigned_rate_limits": { 00:09:55.429 "rw_ios_per_sec": 0, 00:09:55.429 "rw_mbytes_per_sec": 0, 00:09:55.429 "r_mbytes_per_sec": 0, 00:09:55.429 "w_mbytes_per_sec": 0 00:09:55.429 }, 00:09:55.429 "claimed": true, 00:09:55.429 "claim_type": "exclusive_write", 00:09:55.429 "zoned": false, 00:09:55.429 "supported_io_types": { 00:09:55.429 "read": true, 00:09:55.429 "write": true, 00:09:55.429 "unmap": true, 00:09:55.429 "flush": true, 00:09:55.429 "reset": true, 00:09:55.429 "nvme_admin": false, 00:09:55.429 "nvme_io": false, 00:09:55.429 "nvme_io_md": false, 00:09:55.429 "write_zeroes": true, 00:09:55.429 "zcopy": true, 00:09:55.429 "get_zone_info": false, 00:09:55.429 "zone_management": false, 00:09:55.429 "zone_append": false, 00:09:55.429 "compare": false, 00:09:55.429 "compare_and_write": false, 00:09:55.429 "abort": true, 00:09:55.429 "seek_hole": false, 00:09:55.429 "seek_data": false, 00:09:55.429 "copy": true, 00:09:55.429 "nvme_iov_md": false 00:09:55.429 }, 00:09:55.429 "memory_domains": [ 00:09:55.429 { 00:09:55.429 "dma_device_id": "system", 00:09:55.429 "dma_device_type": 1 00:09:55.429 }, 00:09:55.429 { 00:09:55.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.429 "dma_device_type": 2 00:09:55.429 } 00:09:55.429 ], 00:09:55.429 "driver_specific": {} 00:09:55.429 } 00:09:55.429 ] 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.429 "name": "Existed_Raid", 00:09:55.429 "uuid": "109b3fe8-bc9b-4e1a-9ca5-f83aa740d7bf", 00:09:55.429 "strip_size_kb": 64, 00:09:55.429 "state": "online", 00:09:55.429 "raid_level": "raid0", 00:09:55.429 "superblock": false, 00:09:55.429 "num_base_bdevs": 4, 00:09:55.429 
"num_base_bdevs_discovered": 4, 00:09:55.429 "num_base_bdevs_operational": 4, 00:09:55.429 "base_bdevs_list": [ 00:09:55.429 { 00:09:55.429 "name": "NewBaseBdev", 00:09:55.429 "uuid": "ddb0c6ff-d2be-4398-9182-ff0e943f9778", 00:09:55.429 "is_configured": true, 00:09:55.429 "data_offset": 0, 00:09:55.429 "data_size": 65536 00:09:55.429 }, 00:09:55.429 { 00:09:55.429 "name": "BaseBdev2", 00:09:55.429 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:55.429 "is_configured": true, 00:09:55.429 "data_offset": 0, 00:09:55.429 "data_size": 65536 00:09:55.429 }, 00:09:55.429 { 00:09:55.429 "name": "BaseBdev3", 00:09:55.429 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 00:09:55.429 "is_configured": true, 00:09:55.429 "data_offset": 0, 00:09:55.429 "data_size": 65536 00:09:55.429 }, 00:09:55.429 { 00:09:55.429 "name": "BaseBdev4", 00:09:55.429 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:55.429 "is_configured": true, 00:09:55.429 "data_offset": 0, 00:09:55.429 "data_size": 65536 00:09:55.429 } 00:09:55.429 ] 00:09:55.429 }' 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.429 03:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.000 [2024-11-20 03:16:45.437226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.000 "name": "Existed_Raid", 00:09:56.000 "aliases": [ 00:09:56.000 "109b3fe8-bc9b-4e1a-9ca5-f83aa740d7bf" 00:09:56.000 ], 00:09:56.000 "product_name": "Raid Volume", 00:09:56.000 "block_size": 512, 00:09:56.000 "num_blocks": 262144, 00:09:56.000 "uuid": "109b3fe8-bc9b-4e1a-9ca5-f83aa740d7bf", 00:09:56.000 "assigned_rate_limits": { 00:09:56.000 "rw_ios_per_sec": 0, 00:09:56.000 "rw_mbytes_per_sec": 0, 00:09:56.000 "r_mbytes_per_sec": 0, 00:09:56.000 "w_mbytes_per_sec": 0 00:09:56.000 }, 00:09:56.000 "claimed": false, 00:09:56.000 "zoned": false, 00:09:56.000 "supported_io_types": { 00:09:56.000 "read": true, 00:09:56.000 "write": true, 00:09:56.000 "unmap": true, 00:09:56.000 "flush": true, 00:09:56.000 "reset": true, 00:09:56.000 "nvme_admin": false, 00:09:56.000 "nvme_io": false, 00:09:56.000 "nvme_io_md": false, 00:09:56.000 "write_zeroes": true, 00:09:56.000 "zcopy": false, 00:09:56.000 "get_zone_info": false, 00:09:56.000 "zone_management": false, 00:09:56.000 "zone_append": false, 00:09:56.000 "compare": false, 00:09:56.000 "compare_and_write": false, 00:09:56.000 "abort": false, 00:09:56.000 "seek_hole": false, 00:09:56.000 "seek_data": false, 00:09:56.000 "copy": false, 00:09:56.000 "nvme_iov_md": false 00:09:56.000 }, 00:09:56.000 "memory_domains": [ 
00:09:56.000 { 00:09:56.000 "dma_device_id": "system", 00:09:56.000 "dma_device_type": 1 00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.000 "dma_device_type": 2 00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "dma_device_id": "system", 00:09:56.000 "dma_device_type": 1 00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.000 "dma_device_type": 2 00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "dma_device_id": "system", 00:09:56.000 "dma_device_type": 1 00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.000 "dma_device_type": 2 00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "dma_device_id": "system", 00:09:56.000 "dma_device_type": 1 00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.000 "dma_device_type": 2 00:09:56.000 } 00:09:56.000 ], 00:09:56.000 "driver_specific": { 00:09:56.000 "raid": { 00:09:56.000 "uuid": "109b3fe8-bc9b-4e1a-9ca5-f83aa740d7bf", 00:09:56.000 "strip_size_kb": 64, 00:09:56.000 "state": "online", 00:09:56.000 "raid_level": "raid0", 00:09:56.000 "superblock": false, 00:09:56.000 "num_base_bdevs": 4, 00:09:56.000 "num_base_bdevs_discovered": 4, 00:09:56.000 "num_base_bdevs_operational": 4, 00:09:56.000 "base_bdevs_list": [ 00:09:56.000 { 00:09:56.000 "name": "NewBaseBdev", 00:09:56.000 "uuid": "ddb0c6ff-d2be-4398-9182-ff0e943f9778", 00:09:56.000 "is_configured": true, 00:09:56.000 "data_offset": 0, 00:09:56.000 "data_size": 65536 00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "name": "BaseBdev2", 00:09:56.000 "uuid": "e6ee2280-85a2-45da-abe5-6b0f706d639d", 00:09:56.000 "is_configured": true, 00:09:56.000 "data_offset": 0, 00:09:56.000 "data_size": 65536 00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "name": "BaseBdev3", 00:09:56.000 "uuid": "4f65a973-5faf-4461-854e-4c9fc9bbbe07", 00:09:56.000 "is_configured": true, 00:09:56.000 "data_offset": 0, 00:09:56.000 "data_size": 65536 
00:09:56.000 }, 00:09:56.000 { 00:09:56.000 "name": "BaseBdev4", 00:09:56.000 "uuid": "47ad2dd8-4126-4562-b1b1-3ba7146d7045", 00:09:56.000 "is_configured": true, 00:09:56.000 "data_offset": 0, 00:09:56.000 "data_size": 65536 00:09:56.000 } 00:09:56.000 ] 00:09:56.000 } 00:09:56.000 } 00:09:56.000 }' 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:56.000 BaseBdev2 00:09:56.000 BaseBdev3 00:09:56.000 BaseBdev4' 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:56.000 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.001 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.001 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.001 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.001 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.001 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.001 
03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.001 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.001 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.001 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.288 [2024-11-20 03:16:45.760236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.288 [2024-11-20 03:16:45.760268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.288 [2024-11-20 03:16:45.760350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.288 [2024-11-20 03:16:45.760420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.288 [2024-11-20 03:16:45.760431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69228 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69228 ']' 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69228 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69228 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69228' 00:09:56.288 killing process with pid 69228 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69228 00:09:56.288 [2024-11-20 03:16:45.809183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.288 03:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69228 00:09:56.858 [2024-11-20 03:16:46.210907] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.797 03:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:57.797 00:09:57.797 real 0m11.817s 00:09:57.797 user 0m18.834s 00:09:57.797 sys 0m2.070s 00:09:57.797 ************************************ 00:09:57.797 END TEST raid_state_function_test 00:09:57.797 ************************************ 00:09:57.797 03:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.797 03:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.797 03:16:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:57.797 03:16:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.797 03:16:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.797 03:16:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.057 ************************************ 00:09:58.057 START TEST raid_state_function_test_sb 00:09:58.057 ************************************ 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:58.057 
03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69905 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69905' 00:09:58.057 Process raid pid: 69905 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69905 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69905 ']' 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.057 03:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.057 [2024-11-20 03:16:47.540263] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:09:58.057 [2024-11-20 03:16:47.540385] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.317 [2024-11-20 03:16:47.717074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.317 [2024-11-20 03:16:47.826815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.576 [2024-11-20 03:16:48.036803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.576 [2024-11-20 03:16:48.036847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.836 [2024-11-20 03:16:48.381556] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.836 [2024-11-20 03:16:48.381684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.836 [2024-11-20 03:16:48.381716] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.836 [2024-11-20 03:16:48.381740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.836 [2024-11-20 03:16:48.381759] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:58.836 [2024-11-20 03:16:48.381780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.836 [2024-11-20 03:16:48.381799] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.836 [2024-11-20 03:16:48.381820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.836 03:16:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.836 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.836 "name": "Existed_Raid", 00:09:58.836 "uuid": "b1454fcc-7f69-4509-a8e6-fed069ea347d", 00:09:58.836 "strip_size_kb": 64, 00:09:58.836 "state": "configuring", 00:09:58.836 "raid_level": "raid0", 00:09:58.836 "superblock": true, 00:09:58.836 "num_base_bdevs": 4, 00:09:58.836 "num_base_bdevs_discovered": 0, 00:09:58.836 "num_base_bdevs_operational": 4, 00:09:58.836 "base_bdevs_list": [ 00:09:58.836 { 00:09:58.836 "name": "BaseBdev1", 00:09:58.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.836 "is_configured": false, 00:09:58.836 "data_offset": 0, 00:09:58.836 "data_size": 0 00:09:58.836 }, 00:09:58.836 { 00:09:58.836 "name": "BaseBdev2", 00:09:58.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.837 "is_configured": false, 00:09:58.837 "data_offset": 0, 00:09:58.837 "data_size": 0 00:09:58.837 }, 00:09:58.837 { 00:09:58.837 "name": "BaseBdev3", 00:09:58.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.837 "is_configured": false, 00:09:58.837 "data_offset": 0, 00:09:58.837 "data_size": 0 00:09:58.837 }, 00:09:58.837 { 00:09:58.837 "name": "BaseBdev4", 00:09:58.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.837 "is_configured": false, 00:09:58.837 "data_offset": 0, 00:09:58.837 "data_size": 0 00:09:58.837 } 00:09:58.837 ] 00:09:58.837 }' 00:09:58.837 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.837 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.405 [2024-11-20 03:16:48.828717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.405 [2024-11-20 03:16:48.828761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.405 [2024-11-20 03:16:48.840701] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.405 [2024-11-20 03:16:48.840742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.405 [2024-11-20 03:16:48.840751] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.405 [2024-11-20 03:16:48.840760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.405 [2024-11-20 03:16:48.840782] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.405 [2024-11-20 03:16:48.840791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.405 [2024-11-20 03:16:48.840798] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:59.405 [2024-11-20 03:16:48.840807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.405 [2024-11-20 03:16:48.891081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.405 BaseBdev1 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.405 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.405 [ 00:09:59.405 { 00:09:59.405 "name": "BaseBdev1", 00:09:59.405 "aliases": [ 00:09:59.405 "310aa5e4-0543-49d1-8b12-0ccea2f778b9" 00:09:59.405 ], 00:09:59.405 "product_name": "Malloc disk", 00:09:59.405 "block_size": 512, 00:09:59.405 "num_blocks": 65536, 00:09:59.405 "uuid": "310aa5e4-0543-49d1-8b12-0ccea2f778b9", 00:09:59.405 "assigned_rate_limits": { 00:09:59.405 "rw_ios_per_sec": 0, 00:09:59.405 "rw_mbytes_per_sec": 0, 00:09:59.405 "r_mbytes_per_sec": 0, 00:09:59.405 "w_mbytes_per_sec": 0 00:09:59.405 }, 00:09:59.405 "claimed": true, 00:09:59.405 "claim_type": "exclusive_write", 00:09:59.405 "zoned": false, 00:09:59.405 "supported_io_types": { 00:09:59.405 "read": true, 00:09:59.405 "write": true, 00:09:59.405 "unmap": true, 00:09:59.405 "flush": true, 00:09:59.405 "reset": true, 00:09:59.405 "nvme_admin": false, 00:09:59.405 "nvme_io": false, 00:09:59.405 "nvme_io_md": false, 00:09:59.405 "write_zeroes": true, 00:09:59.405 "zcopy": true, 00:09:59.405 "get_zone_info": false, 00:09:59.405 "zone_management": false, 00:09:59.405 "zone_append": false, 00:09:59.405 "compare": false, 00:09:59.405 "compare_and_write": false, 00:09:59.405 "abort": true, 00:09:59.405 "seek_hole": false, 00:09:59.405 "seek_data": false, 00:09:59.405 "copy": true, 00:09:59.405 "nvme_iov_md": false 00:09:59.405 }, 00:09:59.405 "memory_domains": [ 00:09:59.405 { 00:09:59.406 "dma_device_id": "system", 00:09:59.406 "dma_device_type": 1 00:09:59.406 }, 00:09:59.406 { 00:09:59.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.406 "dma_device_type": 2 00:09:59.406 } 00:09:59.406 ], 00:09:59.406 "driver_specific": {} 
00:09:59.406 } 00:09:59.406 ] 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.406 "name": "Existed_Raid", 00:09:59.406 "uuid": "a32ed47d-7f71-4ffc-a122-e3afeb46f2ae", 00:09:59.406 "strip_size_kb": 64, 00:09:59.406 "state": "configuring", 00:09:59.406 "raid_level": "raid0", 00:09:59.406 "superblock": true, 00:09:59.406 "num_base_bdevs": 4, 00:09:59.406 "num_base_bdevs_discovered": 1, 00:09:59.406 "num_base_bdevs_operational": 4, 00:09:59.406 "base_bdevs_list": [ 00:09:59.406 { 00:09:59.406 "name": "BaseBdev1", 00:09:59.406 "uuid": "310aa5e4-0543-49d1-8b12-0ccea2f778b9", 00:09:59.406 "is_configured": true, 00:09:59.406 "data_offset": 2048, 00:09:59.406 "data_size": 63488 00:09:59.406 }, 00:09:59.406 { 00:09:59.406 "name": "BaseBdev2", 00:09:59.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.406 "is_configured": false, 00:09:59.406 "data_offset": 0, 00:09:59.406 "data_size": 0 00:09:59.406 }, 00:09:59.406 { 00:09:59.406 "name": "BaseBdev3", 00:09:59.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.406 "is_configured": false, 00:09:59.406 "data_offset": 0, 00:09:59.406 "data_size": 0 00:09:59.406 }, 00:09:59.406 { 00:09:59.406 "name": "BaseBdev4", 00:09:59.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.406 "is_configured": false, 00:09:59.406 "data_offset": 0, 00:09:59.406 "data_size": 0 00:09:59.406 } 00:09:59.406 ] 00:09:59.406 }' 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.406 03:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.984 [2024-11-20 03:16:49.386277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.984 [2024-11-20 03:16:49.386337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.984 [2024-11-20 03:16:49.398311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.984 [2024-11-20 03:16:49.400150] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.984 [2024-11-20 03:16:49.400194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.984 [2024-11-20 03:16:49.400203] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.984 [2024-11-20 03:16:49.400213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.984 [2024-11-20 03:16:49.400220] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:59.984 [2024-11-20 03:16:49.400228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.984 03:16:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.984 "name": 
"Existed_Raid", 00:09:59.984 "uuid": "3d467689-86e0-4368-ae24-1a5cbc99a2f4", 00:09:59.984 "strip_size_kb": 64, 00:09:59.984 "state": "configuring", 00:09:59.984 "raid_level": "raid0", 00:09:59.984 "superblock": true, 00:09:59.984 "num_base_bdevs": 4, 00:09:59.984 "num_base_bdevs_discovered": 1, 00:09:59.984 "num_base_bdevs_operational": 4, 00:09:59.984 "base_bdevs_list": [ 00:09:59.984 { 00:09:59.984 "name": "BaseBdev1", 00:09:59.984 "uuid": "310aa5e4-0543-49d1-8b12-0ccea2f778b9", 00:09:59.984 "is_configured": true, 00:09:59.984 "data_offset": 2048, 00:09:59.984 "data_size": 63488 00:09:59.984 }, 00:09:59.984 { 00:09:59.984 "name": "BaseBdev2", 00:09:59.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.984 "is_configured": false, 00:09:59.984 "data_offset": 0, 00:09:59.984 "data_size": 0 00:09:59.984 }, 00:09:59.984 { 00:09:59.984 "name": "BaseBdev3", 00:09:59.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.984 "is_configured": false, 00:09:59.984 "data_offset": 0, 00:09:59.984 "data_size": 0 00:09:59.984 }, 00:09:59.984 { 00:09:59.984 "name": "BaseBdev4", 00:09:59.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.984 "is_configured": false, 00:09:59.984 "data_offset": 0, 00:09:59.984 "data_size": 0 00:09:59.984 } 00:09:59.984 ] 00:09:59.984 }' 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.984 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.244 [2024-11-20 03:16:49.859212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:00.244 BaseBdev2 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.244 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.505 [ 00:10:00.505 { 00:10:00.505 "name": "BaseBdev2", 00:10:00.505 "aliases": [ 00:10:00.505 "bef0b60b-6238-4098-b71e-d16d8bebd72e" 00:10:00.505 ], 00:10:00.505 "product_name": "Malloc disk", 00:10:00.505 "block_size": 512, 00:10:00.505 "num_blocks": 65536, 00:10:00.505 "uuid": "bef0b60b-6238-4098-b71e-d16d8bebd72e", 00:10:00.505 
"assigned_rate_limits": { 00:10:00.505 "rw_ios_per_sec": 0, 00:10:00.505 "rw_mbytes_per_sec": 0, 00:10:00.505 "r_mbytes_per_sec": 0, 00:10:00.505 "w_mbytes_per_sec": 0 00:10:00.505 }, 00:10:00.505 "claimed": true, 00:10:00.505 "claim_type": "exclusive_write", 00:10:00.505 "zoned": false, 00:10:00.505 "supported_io_types": { 00:10:00.505 "read": true, 00:10:00.505 "write": true, 00:10:00.505 "unmap": true, 00:10:00.505 "flush": true, 00:10:00.505 "reset": true, 00:10:00.505 "nvme_admin": false, 00:10:00.505 "nvme_io": false, 00:10:00.505 "nvme_io_md": false, 00:10:00.505 "write_zeroes": true, 00:10:00.505 "zcopy": true, 00:10:00.505 "get_zone_info": false, 00:10:00.505 "zone_management": false, 00:10:00.505 "zone_append": false, 00:10:00.505 "compare": false, 00:10:00.505 "compare_and_write": false, 00:10:00.505 "abort": true, 00:10:00.505 "seek_hole": false, 00:10:00.505 "seek_data": false, 00:10:00.505 "copy": true, 00:10:00.505 "nvme_iov_md": false 00:10:00.505 }, 00:10:00.505 "memory_domains": [ 00:10:00.505 { 00:10:00.505 "dma_device_id": "system", 00:10:00.505 "dma_device_type": 1 00:10:00.505 }, 00:10:00.505 { 00:10:00.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.505 "dma_device_type": 2 00:10:00.505 } 00:10:00.505 ], 00:10:00.505 "driver_specific": {} 00:10:00.505 } 00:10:00.505 ] 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.505 "name": "Existed_Raid", 00:10:00.505 "uuid": "3d467689-86e0-4368-ae24-1a5cbc99a2f4", 00:10:00.505 "strip_size_kb": 64, 00:10:00.505 "state": "configuring", 00:10:00.505 "raid_level": "raid0", 00:10:00.505 "superblock": true, 00:10:00.505 "num_base_bdevs": 4, 00:10:00.505 "num_base_bdevs_discovered": 2, 00:10:00.505 "num_base_bdevs_operational": 4, 
00:10:00.505 "base_bdevs_list": [ 00:10:00.505 { 00:10:00.505 "name": "BaseBdev1", 00:10:00.505 "uuid": "310aa5e4-0543-49d1-8b12-0ccea2f778b9", 00:10:00.505 "is_configured": true, 00:10:00.505 "data_offset": 2048, 00:10:00.505 "data_size": 63488 00:10:00.505 }, 00:10:00.505 { 00:10:00.505 "name": "BaseBdev2", 00:10:00.505 "uuid": "bef0b60b-6238-4098-b71e-d16d8bebd72e", 00:10:00.505 "is_configured": true, 00:10:00.505 "data_offset": 2048, 00:10:00.505 "data_size": 63488 00:10:00.505 }, 00:10:00.505 { 00:10:00.505 "name": "BaseBdev3", 00:10:00.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.505 "is_configured": false, 00:10:00.505 "data_offset": 0, 00:10:00.505 "data_size": 0 00:10:00.505 }, 00:10:00.505 { 00:10:00.505 "name": "BaseBdev4", 00:10:00.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.505 "is_configured": false, 00:10:00.505 "data_offset": 0, 00:10:00.505 "data_size": 0 00:10:00.505 } 00:10:00.505 ] 00:10:00.505 }' 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.505 03:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.765 [2024-11-20 03:16:50.347464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.765 BaseBdev3 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.765 [ 00:10:00.765 { 00:10:00.765 "name": "BaseBdev3", 00:10:00.765 "aliases": [ 00:10:00.765 "7cc2655d-a8c9-4fb2-8d08-40f15e96488d" 00:10:00.765 ], 00:10:00.765 "product_name": "Malloc disk", 00:10:00.765 "block_size": 512, 00:10:00.765 "num_blocks": 65536, 00:10:00.765 "uuid": "7cc2655d-a8c9-4fb2-8d08-40f15e96488d", 00:10:00.765 "assigned_rate_limits": { 00:10:00.765 "rw_ios_per_sec": 0, 00:10:00.765 "rw_mbytes_per_sec": 0, 00:10:00.765 "r_mbytes_per_sec": 0, 00:10:00.765 "w_mbytes_per_sec": 0 00:10:00.765 }, 00:10:00.765 "claimed": true, 00:10:00.765 "claim_type": "exclusive_write", 00:10:00.765 "zoned": false, 00:10:00.765 "supported_io_types": { 00:10:00.765 "read": true, 00:10:00.765 
"write": true, 00:10:00.765 "unmap": true, 00:10:00.765 "flush": true, 00:10:00.765 "reset": true, 00:10:00.765 "nvme_admin": false, 00:10:00.765 "nvme_io": false, 00:10:00.765 "nvme_io_md": false, 00:10:00.765 "write_zeroes": true, 00:10:00.765 "zcopy": true, 00:10:00.765 "get_zone_info": false, 00:10:00.765 "zone_management": false, 00:10:00.765 "zone_append": false, 00:10:00.765 "compare": false, 00:10:00.765 "compare_and_write": false, 00:10:00.765 "abort": true, 00:10:00.765 "seek_hole": false, 00:10:00.765 "seek_data": false, 00:10:00.765 "copy": true, 00:10:00.765 "nvme_iov_md": false 00:10:00.765 }, 00:10:00.765 "memory_domains": [ 00:10:00.765 { 00:10:00.765 "dma_device_id": "system", 00:10:00.765 "dma_device_type": 1 00:10:00.765 }, 00:10:00.765 { 00:10:00.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.765 "dma_device_type": 2 00:10:00.765 } 00:10:00.765 ], 00:10:00.765 "driver_specific": {} 00:10:00.765 } 00:10:00.765 ] 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.765 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.025 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.025 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.025 "name": "Existed_Raid", 00:10:01.025 "uuid": "3d467689-86e0-4368-ae24-1a5cbc99a2f4", 00:10:01.025 "strip_size_kb": 64, 00:10:01.025 "state": "configuring", 00:10:01.025 "raid_level": "raid0", 00:10:01.025 "superblock": true, 00:10:01.025 "num_base_bdevs": 4, 00:10:01.025 "num_base_bdevs_discovered": 3, 00:10:01.025 "num_base_bdevs_operational": 4, 00:10:01.025 "base_bdevs_list": [ 00:10:01.025 { 00:10:01.025 "name": "BaseBdev1", 00:10:01.025 "uuid": "310aa5e4-0543-49d1-8b12-0ccea2f778b9", 00:10:01.025 "is_configured": true, 00:10:01.025 "data_offset": 2048, 00:10:01.025 "data_size": 63488 00:10:01.025 }, 00:10:01.025 { 00:10:01.025 "name": "BaseBdev2", 00:10:01.025 "uuid": 
"bef0b60b-6238-4098-b71e-d16d8bebd72e", 00:10:01.025 "is_configured": true, 00:10:01.025 "data_offset": 2048, 00:10:01.025 "data_size": 63488 00:10:01.025 }, 00:10:01.025 { 00:10:01.025 "name": "BaseBdev3", 00:10:01.025 "uuid": "7cc2655d-a8c9-4fb2-8d08-40f15e96488d", 00:10:01.025 "is_configured": true, 00:10:01.025 "data_offset": 2048, 00:10:01.025 "data_size": 63488 00:10:01.025 }, 00:10:01.025 { 00:10:01.025 "name": "BaseBdev4", 00:10:01.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.025 "is_configured": false, 00:10:01.025 "data_offset": 0, 00:10:01.025 "data_size": 0 00:10:01.025 } 00:10:01.025 ] 00:10:01.025 }' 00:10:01.025 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.025 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.286 [2024-11-20 03:16:50.866169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:01.286 [2024-11-20 03:16:50.866476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:01.286 [2024-11-20 03:16:50.866494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:01.286 [2024-11-20 03:16:50.866794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:01.286 BaseBdev4 00:10:01.286 [2024-11-20 03:16:50.866965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:01.286 [2024-11-20 03:16:50.866982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:01.286 [2024-11-20 03:16:50.867132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.286 [ 00:10:01.286 { 00:10:01.286 "name": "BaseBdev4", 00:10:01.286 "aliases": [ 00:10:01.286 "967ee7b9-1ed3-4eed-af15-6c0cfdc56279" 00:10:01.286 ], 00:10:01.286 "product_name": "Malloc disk", 00:10:01.286 "block_size": 512, 00:10:01.286 
"num_blocks": 65536, 00:10:01.286 "uuid": "967ee7b9-1ed3-4eed-af15-6c0cfdc56279", 00:10:01.286 "assigned_rate_limits": { 00:10:01.286 "rw_ios_per_sec": 0, 00:10:01.286 "rw_mbytes_per_sec": 0, 00:10:01.286 "r_mbytes_per_sec": 0, 00:10:01.286 "w_mbytes_per_sec": 0 00:10:01.286 }, 00:10:01.286 "claimed": true, 00:10:01.286 "claim_type": "exclusive_write", 00:10:01.286 "zoned": false, 00:10:01.286 "supported_io_types": { 00:10:01.286 "read": true, 00:10:01.286 "write": true, 00:10:01.286 "unmap": true, 00:10:01.286 "flush": true, 00:10:01.286 "reset": true, 00:10:01.286 "nvme_admin": false, 00:10:01.286 "nvme_io": false, 00:10:01.286 "nvme_io_md": false, 00:10:01.286 "write_zeroes": true, 00:10:01.286 "zcopy": true, 00:10:01.286 "get_zone_info": false, 00:10:01.286 "zone_management": false, 00:10:01.286 "zone_append": false, 00:10:01.286 "compare": false, 00:10:01.286 "compare_and_write": false, 00:10:01.286 "abort": true, 00:10:01.286 "seek_hole": false, 00:10:01.286 "seek_data": false, 00:10:01.286 "copy": true, 00:10:01.286 "nvme_iov_md": false 00:10:01.286 }, 00:10:01.286 "memory_domains": [ 00:10:01.286 { 00:10:01.286 "dma_device_id": "system", 00:10:01.286 "dma_device_type": 1 00:10:01.286 }, 00:10:01.286 { 00:10:01.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.286 "dma_device_type": 2 00:10:01.286 } 00:10:01.286 ], 00:10:01.286 "driver_specific": {} 00:10:01.286 } 00:10:01.286 ] 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.286 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.547 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.547 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.547 "name": "Existed_Raid", 00:10:01.547 "uuid": "3d467689-86e0-4368-ae24-1a5cbc99a2f4", 00:10:01.547 "strip_size_kb": 64, 00:10:01.547 "state": "online", 00:10:01.547 "raid_level": "raid0", 00:10:01.547 "superblock": true, 00:10:01.547 "num_base_bdevs": 4, 
00:10:01.547 "num_base_bdevs_discovered": 4, 00:10:01.547 "num_base_bdevs_operational": 4, 00:10:01.547 "base_bdevs_list": [ 00:10:01.547 { 00:10:01.547 "name": "BaseBdev1", 00:10:01.547 "uuid": "310aa5e4-0543-49d1-8b12-0ccea2f778b9", 00:10:01.547 "is_configured": true, 00:10:01.547 "data_offset": 2048, 00:10:01.547 "data_size": 63488 00:10:01.547 }, 00:10:01.547 { 00:10:01.547 "name": "BaseBdev2", 00:10:01.547 "uuid": "bef0b60b-6238-4098-b71e-d16d8bebd72e", 00:10:01.547 "is_configured": true, 00:10:01.547 "data_offset": 2048, 00:10:01.547 "data_size": 63488 00:10:01.547 }, 00:10:01.547 { 00:10:01.547 "name": "BaseBdev3", 00:10:01.547 "uuid": "7cc2655d-a8c9-4fb2-8d08-40f15e96488d", 00:10:01.547 "is_configured": true, 00:10:01.547 "data_offset": 2048, 00:10:01.547 "data_size": 63488 00:10:01.547 }, 00:10:01.547 { 00:10:01.547 "name": "BaseBdev4", 00:10:01.547 "uuid": "967ee7b9-1ed3-4eed-af15-6c0cfdc56279", 00:10:01.547 "is_configured": true, 00:10:01.547 "data_offset": 2048, 00:10:01.547 "data_size": 63488 00:10:01.547 } 00:10:01.547 ] 00:10:01.547 }' 00:10:01.547 03:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.547 03:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.807 
03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.807 [2024-11-20 03:16:51.345784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.807 "name": "Existed_Raid", 00:10:01.807 "aliases": [ 00:10:01.807 "3d467689-86e0-4368-ae24-1a5cbc99a2f4" 00:10:01.807 ], 00:10:01.807 "product_name": "Raid Volume", 00:10:01.807 "block_size": 512, 00:10:01.807 "num_blocks": 253952, 00:10:01.807 "uuid": "3d467689-86e0-4368-ae24-1a5cbc99a2f4", 00:10:01.807 "assigned_rate_limits": { 00:10:01.807 "rw_ios_per_sec": 0, 00:10:01.807 "rw_mbytes_per_sec": 0, 00:10:01.807 "r_mbytes_per_sec": 0, 00:10:01.807 "w_mbytes_per_sec": 0 00:10:01.807 }, 00:10:01.807 "claimed": false, 00:10:01.807 "zoned": false, 00:10:01.807 "supported_io_types": { 00:10:01.807 "read": true, 00:10:01.807 "write": true, 00:10:01.807 "unmap": true, 00:10:01.807 "flush": true, 00:10:01.807 "reset": true, 00:10:01.807 "nvme_admin": false, 00:10:01.807 "nvme_io": false, 00:10:01.807 "nvme_io_md": false, 00:10:01.807 "write_zeroes": true, 00:10:01.807 "zcopy": false, 00:10:01.807 "get_zone_info": false, 00:10:01.807 "zone_management": false, 00:10:01.807 "zone_append": false, 00:10:01.807 "compare": false, 00:10:01.807 "compare_and_write": false, 00:10:01.807 "abort": false, 00:10:01.807 "seek_hole": false, 00:10:01.807 "seek_data": false, 00:10:01.807 "copy": false, 00:10:01.807 
"nvme_iov_md": false 00:10:01.807 }, 00:10:01.807 "memory_domains": [ 00:10:01.807 { 00:10:01.807 "dma_device_id": "system", 00:10:01.807 "dma_device_type": 1 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.807 "dma_device_type": 2 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "dma_device_id": "system", 00:10:01.807 "dma_device_type": 1 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.807 "dma_device_type": 2 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "dma_device_id": "system", 00:10:01.807 "dma_device_type": 1 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.807 "dma_device_type": 2 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "dma_device_id": "system", 00:10:01.807 "dma_device_type": 1 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.807 "dma_device_type": 2 00:10:01.807 } 00:10:01.807 ], 00:10:01.807 "driver_specific": { 00:10:01.807 "raid": { 00:10:01.807 "uuid": "3d467689-86e0-4368-ae24-1a5cbc99a2f4", 00:10:01.807 "strip_size_kb": 64, 00:10:01.807 "state": "online", 00:10:01.807 "raid_level": "raid0", 00:10:01.807 "superblock": true, 00:10:01.807 "num_base_bdevs": 4, 00:10:01.807 "num_base_bdevs_discovered": 4, 00:10:01.807 "num_base_bdevs_operational": 4, 00:10:01.807 "base_bdevs_list": [ 00:10:01.807 { 00:10:01.807 "name": "BaseBdev1", 00:10:01.807 "uuid": "310aa5e4-0543-49d1-8b12-0ccea2f778b9", 00:10:01.807 "is_configured": true, 00:10:01.807 "data_offset": 2048, 00:10:01.807 "data_size": 63488 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "name": "BaseBdev2", 00:10:01.807 "uuid": "bef0b60b-6238-4098-b71e-d16d8bebd72e", 00:10:01.807 "is_configured": true, 00:10:01.807 "data_offset": 2048, 00:10:01.807 "data_size": 63488 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "name": "BaseBdev3", 00:10:01.807 "uuid": "7cc2655d-a8c9-4fb2-8d08-40f15e96488d", 00:10:01.807 "is_configured": true, 
00:10:01.807 "data_offset": 2048, 00:10:01.807 "data_size": 63488 00:10:01.807 }, 00:10:01.807 { 00:10:01.807 "name": "BaseBdev4", 00:10:01.807 "uuid": "967ee7b9-1ed3-4eed-af15-6c0cfdc56279", 00:10:01.807 "is_configured": true, 00:10:01.807 "data_offset": 2048, 00:10:01.807 "data_size": 63488 00:10:01.807 } 00:10:01.807 ] 00:10:01.807 } 00:10:01.807 } 00:10:01.807 }' 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.807 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.807 BaseBdev2 00:10:01.807 BaseBdev3 00:10:01.808 BaseBdev4' 00:10:01.808 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.068 03:16:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.068 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 [2024-11-20 03:16:51.652956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.068 [2024-11-20 03:16:51.652991] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.068 [2024-11-20 03:16:51.653046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:02.328 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.328 "name": "Existed_Raid", 00:10:02.328 "uuid": "3d467689-86e0-4368-ae24-1a5cbc99a2f4", 00:10:02.328 "strip_size_kb": 64, 00:10:02.328 "state": "offline", 00:10:02.328 "raid_level": "raid0", 00:10:02.329 "superblock": true, 00:10:02.329 "num_base_bdevs": 4, 00:10:02.329 "num_base_bdevs_discovered": 3, 00:10:02.329 "num_base_bdevs_operational": 3, 00:10:02.329 "base_bdevs_list": [ 00:10:02.329 { 00:10:02.329 "name": null, 00:10:02.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.329 "is_configured": false, 00:10:02.329 "data_offset": 0, 00:10:02.329 "data_size": 63488 00:10:02.329 }, 00:10:02.329 { 00:10:02.329 "name": "BaseBdev2", 00:10:02.329 "uuid": "bef0b60b-6238-4098-b71e-d16d8bebd72e", 00:10:02.329 "is_configured": true, 00:10:02.329 "data_offset": 2048, 00:10:02.329 "data_size": 63488 00:10:02.329 }, 00:10:02.329 { 00:10:02.329 "name": "BaseBdev3", 00:10:02.329 "uuid": "7cc2655d-a8c9-4fb2-8d08-40f15e96488d", 00:10:02.329 "is_configured": true, 00:10:02.329 "data_offset": 2048, 00:10:02.329 "data_size": 63488 00:10:02.329 }, 00:10:02.329 { 00:10:02.329 "name": "BaseBdev4", 00:10:02.329 "uuid": "967ee7b9-1ed3-4eed-af15-6c0cfdc56279", 00:10:02.329 "is_configured": true, 00:10:02.329 "data_offset": 2048, 00:10:02.329 "data_size": 63488 00:10:02.329 } 00:10:02.329 ] 00:10:02.329 }' 00:10:02.329 03:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.329 03:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.588 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.588 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.588 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.588 
03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.588 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.588 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.588 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.848 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.848 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.848 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.848 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.848 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.849 [2024-11-20 03:16:52.237478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.849 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.849 [2024-11-20 03:16:52.388561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:03.110 03:16:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 [2024-11-20 03:16:52.531108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:03.110 [2024-11-20 03:16:52.531165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 BaseBdev2 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.110 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.370 [ 00:10:03.370 { 00:10:03.370 "name": "BaseBdev2", 00:10:03.370 "aliases": [ 00:10:03.370 
"e5ea8c57-7153-408b-ad25-4e38e12ff6de" 00:10:03.370 ], 00:10:03.370 "product_name": "Malloc disk", 00:10:03.370 "block_size": 512, 00:10:03.370 "num_blocks": 65536, 00:10:03.370 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:03.370 "assigned_rate_limits": { 00:10:03.370 "rw_ios_per_sec": 0, 00:10:03.370 "rw_mbytes_per_sec": 0, 00:10:03.370 "r_mbytes_per_sec": 0, 00:10:03.370 "w_mbytes_per_sec": 0 00:10:03.370 }, 00:10:03.370 "claimed": false, 00:10:03.370 "zoned": false, 00:10:03.370 "supported_io_types": { 00:10:03.370 "read": true, 00:10:03.370 "write": true, 00:10:03.370 "unmap": true, 00:10:03.370 "flush": true, 00:10:03.370 "reset": true, 00:10:03.370 "nvme_admin": false, 00:10:03.370 "nvme_io": false, 00:10:03.370 "nvme_io_md": false, 00:10:03.370 "write_zeroes": true, 00:10:03.371 "zcopy": true, 00:10:03.371 "get_zone_info": false, 00:10:03.371 "zone_management": false, 00:10:03.371 "zone_append": false, 00:10:03.371 "compare": false, 00:10:03.371 "compare_and_write": false, 00:10:03.371 "abort": true, 00:10:03.371 "seek_hole": false, 00:10:03.371 "seek_data": false, 00:10:03.371 "copy": true, 00:10:03.371 "nvme_iov_md": false 00:10:03.371 }, 00:10:03.371 "memory_domains": [ 00:10:03.371 { 00:10:03.371 "dma_device_id": "system", 00:10:03.371 "dma_device_type": 1 00:10:03.371 }, 00:10:03.371 { 00:10:03.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.371 "dma_device_type": 2 00:10:03.371 } 00:10:03.371 ], 00:10:03.371 "driver_specific": {} 00:10:03.371 } 00:10:03.371 ] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.371 03:16:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.371 BaseBdev3 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.371 [ 00:10:03.371 { 
00:10:03.371 "name": "BaseBdev3", 00:10:03.371 "aliases": [ 00:10:03.371 "d110afdd-9d82-4d29-97c0-be7f93a2415b" 00:10:03.371 ], 00:10:03.371 "product_name": "Malloc disk", 00:10:03.371 "block_size": 512, 00:10:03.371 "num_blocks": 65536, 00:10:03.371 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:03.371 "assigned_rate_limits": { 00:10:03.371 "rw_ios_per_sec": 0, 00:10:03.371 "rw_mbytes_per_sec": 0, 00:10:03.371 "r_mbytes_per_sec": 0, 00:10:03.371 "w_mbytes_per_sec": 0 00:10:03.371 }, 00:10:03.371 "claimed": false, 00:10:03.371 "zoned": false, 00:10:03.371 "supported_io_types": { 00:10:03.371 "read": true, 00:10:03.371 "write": true, 00:10:03.371 "unmap": true, 00:10:03.371 "flush": true, 00:10:03.371 "reset": true, 00:10:03.371 "nvme_admin": false, 00:10:03.371 "nvme_io": false, 00:10:03.371 "nvme_io_md": false, 00:10:03.371 "write_zeroes": true, 00:10:03.371 "zcopy": true, 00:10:03.371 "get_zone_info": false, 00:10:03.371 "zone_management": false, 00:10:03.371 "zone_append": false, 00:10:03.371 "compare": false, 00:10:03.371 "compare_and_write": false, 00:10:03.371 "abort": true, 00:10:03.371 "seek_hole": false, 00:10:03.371 "seek_data": false, 00:10:03.371 "copy": true, 00:10:03.371 "nvme_iov_md": false 00:10:03.371 }, 00:10:03.371 "memory_domains": [ 00:10:03.371 { 00:10:03.371 "dma_device_id": "system", 00:10:03.371 "dma_device_type": 1 00:10:03.371 }, 00:10:03.371 { 00:10:03.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.371 "dma_device_type": 2 00:10:03.371 } 00:10:03.371 ], 00:10:03.371 "driver_specific": {} 00:10:03.371 } 00:10:03.371 ] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.371 BaseBdev4 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:03.371 [ 00:10:03.371 { 00:10:03.371 "name": "BaseBdev4", 00:10:03.371 "aliases": [ 00:10:03.371 "fb9323e0-0536-4c25-b679-8d12b92c46ba" 00:10:03.371 ], 00:10:03.371 "product_name": "Malloc disk", 00:10:03.371 "block_size": 512, 00:10:03.371 "num_blocks": 65536, 00:10:03.371 "uuid": "fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:03.371 "assigned_rate_limits": { 00:10:03.371 "rw_ios_per_sec": 0, 00:10:03.371 "rw_mbytes_per_sec": 0, 00:10:03.371 "r_mbytes_per_sec": 0, 00:10:03.371 "w_mbytes_per_sec": 0 00:10:03.371 }, 00:10:03.371 "claimed": false, 00:10:03.371 "zoned": false, 00:10:03.371 "supported_io_types": { 00:10:03.371 "read": true, 00:10:03.371 "write": true, 00:10:03.371 "unmap": true, 00:10:03.371 "flush": true, 00:10:03.371 "reset": true, 00:10:03.371 "nvme_admin": false, 00:10:03.371 "nvme_io": false, 00:10:03.371 "nvme_io_md": false, 00:10:03.371 "write_zeroes": true, 00:10:03.371 "zcopy": true, 00:10:03.371 "get_zone_info": false, 00:10:03.371 "zone_management": false, 00:10:03.371 "zone_append": false, 00:10:03.371 "compare": false, 00:10:03.371 "compare_and_write": false, 00:10:03.371 "abort": true, 00:10:03.371 "seek_hole": false, 00:10:03.371 "seek_data": false, 00:10:03.371 "copy": true, 00:10:03.371 "nvme_iov_md": false 00:10:03.371 }, 00:10:03.371 "memory_domains": [ 00:10:03.371 { 00:10:03.371 "dma_device_id": "system", 00:10:03.371 "dma_device_type": 1 00:10:03.371 }, 00:10:03.371 { 00:10:03.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.371 "dma_device_type": 2 00:10:03.371 } 00:10:03.371 ], 00:10:03.371 "driver_specific": {} 00:10:03.371 } 00:10:03.371 ] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.371 03:16:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.371 [2024-11-20 03:16:52.934488] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.371 [2024-11-20 03:16:52.934553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.371 [2024-11-20 03:16:52.934595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.371 [2024-11-20 03:16:52.936566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.371 [2024-11-20 03:16:52.936636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.371 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.372 "name": "Existed_Raid", 00:10:03.372 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:03.372 "strip_size_kb": 64, 00:10:03.372 "state": "configuring", 00:10:03.372 "raid_level": "raid0", 00:10:03.372 "superblock": true, 00:10:03.372 "num_base_bdevs": 4, 00:10:03.372 "num_base_bdevs_discovered": 3, 00:10:03.372 "num_base_bdevs_operational": 4, 00:10:03.372 "base_bdevs_list": [ 00:10:03.372 { 00:10:03.372 "name": "BaseBdev1", 00:10:03.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.372 "is_configured": false, 00:10:03.372 "data_offset": 0, 00:10:03.372 "data_size": 0 00:10:03.372 }, 00:10:03.372 { 00:10:03.372 "name": "BaseBdev2", 00:10:03.372 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:03.372 "is_configured": true, 00:10:03.372 "data_offset": 2048, 00:10:03.372 "data_size": 63488 
00:10:03.372 }, 00:10:03.372 { 00:10:03.372 "name": "BaseBdev3", 00:10:03.372 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:03.372 "is_configured": true, 00:10:03.372 "data_offset": 2048, 00:10:03.372 "data_size": 63488 00:10:03.372 }, 00:10:03.372 { 00:10:03.372 "name": "BaseBdev4", 00:10:03.372 "uuid": "fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:03.372 "is_configured": true, 00:10:03.372 "data_offset": 2048, 00:10:03.372 "data_size": 63488 00:10:03.372 } 00:10:03.372 ] 00:10:03.372 }' 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.372 03:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.941 [2024-11-20 03:16:53.409697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.941 "name": "Existed_Raid", 00:10:03.941 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:03.941 "strip_size_kb": 64, 00:10:03.941 "state": "configuring", 00:10:03.941 "raid_level": "raid0", 00:10:03.941 "superblock": true, 00:10:03.941 "num_base_bdevs": 4, 00:10:03.941 "num_base_bdevs_discovered": 2, 00:10:03.941 "num_base_bdevs_operational": 4, 00:10:03.941 "base_bdevs_list": [ 00:10:03.941 { 00:10:03.941 "name": "BaseBdev1", 00:10:03.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.941 "is_configured": false, 00:10:03.941 "data_offset": 0, 00:10:03.941 "data_size": 0 00:10:03.941 }, 00:10:03.941 { 00:10:03.941 "name": null, 00:10:03.941 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:03.941 "is_configured": false, 00:10:03.941 "data_offset": 0, 00:10:03.941 "data_size": 63488 
00:10:03.941 }, 00:10:03.941 { 00:10:03.941 "name": "BaseBdev3", 00:10:03.941 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:03.941 "is_configured": true, 00:10:03.941 "data_offset": 2048, 00:10:03.941 "data_size": 63488 00:10:03.941 }, 00:10:03.941 { 00:10:03.941 "name": "BaseBdev4", 00:10:03.941 "uuid": "fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:03.941 "is_configured": true, 00:10:03.941 "data_offset": 2048, 00:10:03.941 "data_size": 63488 00:10:03.941 } 00:10:03.941 ] 00:10:03.941 }' 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.941 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.510 [2024-11-20 03:16:53.954226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.510 BaseBdev1 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.510 [ 00:10:04.510 { 00:10:04.510 "name": "BaseBdev1", 00:10:04.510 "aliases": [ 00:10:04.510 "11d42d66-aee1-4597-b264-dfb3c55dc63e" 00:10:04.510 ], 00:10:04.510 "product_name": "Malloc disk", 00:10:04.510 "block_size": 512, 00:10:04.510 "num_blocks": 65536, 00:10:04.510 "uuid": "11d42d66-aee1-4597-b264-dfb3c55dc63e", 00:10:04.510 "assigned_rate_limits": { 00:10:04.510 "rw_ios_per_sec": 0, 00:10:04.510 "rw_mbytes_per_sec": 0, 
00:10:04.510 "r_mbytes_per_sec": 0, 00:10:04.510 "w_mbytes_per_sec": 0 00:10:04.510 }, 00:10:04.510 "claimed": true, 00:10:04.510 "claim_type": "exclusive_write", 00:10:04.510 "zoned": false, 00:10:04.510 "supported_io_types": { 00:10:04.510 "read": true, 00:10:04.510 "write": true, 00:10:04.510 "unmap": true, 00:10:04.510 "flush": true, 00:10:04.510 "reset": true, 00:10:04.510 "nvme_admin": false, 00:10:04.510 "nvme_io": false, 00:10:04.510 "nvme_io_md": false, 00:10:04.510 "write_zeroes": true, 00:10:04.510 "zcopy": true, 00:10:04.510 "get_zone_info": false, 00:10:04.510 "zone_management": false, 00:10:04.510 "zone_append": false, 00:10:04.510 "compare": false, 00:10:04.510 "compare_and_write": false, 00:10:04.510 "abort": true, 00:10:04.510 "seek_hole": false, 00:10:04.510 "seek_data": false, 00:10:04.510 "copy": true, 00:10:04.510 "nvme_iov_md": false 00:10:04.510 }, 00:10:04.510 "memory_domains": [ 00:10:04.510 { 00:10:04.510 "dma_device_id": "system", 00:10:04.510 "dma_device_type": 1 00:10:04.510 }, 00:10:04.510 { 00:10:04.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.510 "dma_device_type": 2 00:10:04.510 } 00:10:04.510 ], 00:10:04.510 "driver_specific": {} 00:10:04.510 } 00:10:04.510 ] 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.510 03:16:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.510 03:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.510 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.510 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.510 "name": "Existed_Raid", 00:10:04.510 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:04.510 "strip_size_kb": 64, 00:10:04.510 "state": "configuring", 00:10:04.510 "raid_level": "raid0", 00:10:04.510 "superblock": true, 00:10:04.510 "num_base_bdevs": 4, 00:10:04.510 "num_base_bdevs_discovered": 3, 00:10:04.510 "num_base_bdevs_operational": 4, 00:10:04.510 "base_bdevs_list": [ 00:10:04.510 { 00:10:04.510 "name": "BaseBdev1", 00:10:04.510 "uuid": "11d42d66-aee1-4597-b264-dfb3c55dc63e", 00:10:04.510 "is_configured": true, 00:10:04.510 "data_offset": 2048, 00:10:04.510 "data_size": 63488 00:10:04.510 }, 00:10:04.510 { 
00:10:04.510 "name": null, 00:10:04.510 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:04.510 "is_configured": false, 00:10:04.510 "data_offset": 0, 00:10:04.510 "data_size": 63488 00:10:04.510 }, 00:10:04.510 { 00:10:04.510 "name": "BaseBdev3", 00:10:04.510 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:04.510 "is_configured": true, 00:10:04.510 "data_offset": 2048, 00:10:04.510 "data_size": 63488 00:10:04.510 }, 00:10:04.510 { 00:10:04.510 "name": "BaseBdev4", 00:10:04.510 "uuid": "fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:04.510 "is_configured": true, 00:10:04.510 "data_offset": 2048, 00:10:04.510 "data_size": 63488 00:10:04.510 } 00:10:04.510 ] 00:10:04.510 }' 00:10:04.510 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.510 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.080 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.080 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.080 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.080 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.080 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.081 [2024-11-20 03:16:54.453544] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.081 03:16:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.081 "name": "Existed_Raid", 00:10:05.081 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:05.081 "strip_size_kb": 64, 00:10:05.081 "state": "configuring", 00:10:05.081 "raid_level": "raid0", 00:10:05.081 "superblock": true, 00:10:05.081 "num_base_bdevs": 4, 00:10:05.081 "num_base_bdevs_discovered": 2, 00:10:05.081 "num_base_bdevs_operational": 4, 00:10:05.081 "base_bdevs_list": [ 00:10:05.081 { 00:10:05.081 "name": "BaseBdev1", 00:10:05.081 "uuid": "11d42d66-aee1-4597-b264-dfb3c55dc63e", 00:10:05.081 "is_configured": true, 00:10:05.081 "data_offset": 2048, 00:10:05.081 "data_size": 63488 00:10:05.081 }, 00:10:05.081 { 00:10:05.081 "name": null, 00:10:05.081 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:05.081 "is_configured": false, 00:10:05.081 "data_offset": 0, 00:10:05.081 "data_size": 63488 00:10:05.081 }, 00:10:05.081 { 00:10:05.081 "name": null, 00:10:05.081 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:05.081 "is_configured": false, 00:10:05.081 "data_offset": 0, 00:10:05.081 "data_size": 63488 00:10:05.081 }, 00:10:05.081 { 00:10:05.081 "name": "BaseBdev4", 00:10:05.081 "uuid": "fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:05.081 "is_configured": true, 00:10:05.081 "data_offset": 2048, 00:10:05.081 "data_size": 63488 00:10:05.081 } 00:10:05.081 ] 00:10:05.081 }' 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.081 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.341 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.341 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.341 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.341 
03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.341 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.341 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:05.341 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:05.341 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.341 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.341 [2024-11-20 03:16:54.968692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.601 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.601 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.601 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.601 "name": "Existed_Raid", 00:10:05.601 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:05.601 "strip_size_kb": 64, 00:10:05.602 "state": "configuring", 00:10:05.602 "raid_level": "raid0", 00:10:05.602 "superblock": true, 00:10:05.602 "num_base_bdevs": 4, 00:10:05.602 "num_base_bdevs_discovered": 3, 00:10:05.602 "num_base_bdevs_operational": 4, 00:10:05.602 "base_bdevs_list": [ 00:10:05.602 { 00:10:05.602 "name": "BaseBdev1", 00:10:05.602 "uuid": "11d42d66-aee1-4597-b264-dfb3c55dc63e", 00:10:05.602 "is_configured": true, 00:10:05.602 "data_offset": 2048, 00:10:05.602 "data_size": 63488 00:10:05.602 }, 00:10:05.602 { 00:10:05.602 "name": null, 00:10:05.602 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:05.602 "is_configured": false, 00:10:05.602 "data_offset": 0, 00:10:05.602 "data_size": 63488 00:10:05.602 }, 00:10:05.602 { 00:10:05.602 "name": "BaseBdev3", 00:10:05.602 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:05.602 "is_configured": true, 00:10:05.602 "data_offset": 2048, 00:10:05.602 "data_size": 63488 00:10:05.602 }, 00:10:05.602 { 00:10:05.602 "name": "BaseBdev4", 00:10:05.602 "uuid": 
"fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:05.602 "is_configured": true, 00:10:05.602 "data_offset": 2048, 00:10:05.602 "data_size": 63488 00:10:05.602 } 00:10:05.602 ] 00:10:05.602 }' 00:10:05.602 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.602 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.864 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.864 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.864 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.864 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.864 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.864 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.864 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.864 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.864 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.864 [2024-11-20 03:16:55.407932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.123 "name": "Existed_Raid", 00:10:06.123 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:06.123 "strip_size_kb": 64, 00:10:06.123 "state": "configuring", 00:10:06.123 "raid_level": "raid0", 00:10:06.123 "superblock": true, 00:10:06.123 "num_base_bdevs": 4, 00:10:06.123 "num_base_bdevs_discovered": 2, 00:10:06.123 "num_base_bdevs_operational": 4, 00:10:06.123 "base_bdevs_list": [ 00:10:06.123 { 00:10:06.123 "name": null, 00:10:06.123 
"uuid": "11d42d66-aee1-4597-b264-dfb3c55dc63e", 00:10:06.123 "is_configured": false, 00:10:06.123 "data_offset": 0, 00:10:06.123 "data_size": 63488 00:10:06.123 }, 00:10:06.123 { 00:10:06.123 "name": null, 00:10:06.123 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:06.123 "is_configured": false, 00:10:06.123 "data_offset": 0, 00:10:06.123 "data_size": 63488 00:10:06.123 }, 00:10:06.123 { 00:10:06.123 "name": "BaseBdev3", 00:10:06.123 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:06.123 "is_configured": true, 00:10:06.123 "data_offset": 2048, 00:10:06.123 "data_size": 63488 00:10:06.123 }, 00:10:06.123 { 00:10:06.123 "name": "BaseBdev4", 00:10:06.123 "uuid": "fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:06.123 "is_configured": true, 00:10:06.123 "data_offset": 2048, 00:10:06.123 "data_size": 63488 00:10:06.123 } 00:10:06.123 ] 00:10:06.123 }' 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.123 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.382 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.382 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.382 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.382 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.382 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.382 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:06.382 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:06.382 03:16:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.382 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.382 [2024-11-20 03:16:56.003855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.382 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.641 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.641 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.641 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.641 03:16:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.641 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.641 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.641 "name": "Existed_Raid", 00:10:06.641 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:06.641 "strip_size_kb": 64, 00:10:06.641 "state": "configuring", 00:10:06.641 "raid_level": "raid0", 00:10:06.641 "superblock": true, 00:10:06.641 "num_base_bdevs": 4, 00:10:06.641 "num_base_bdevs_discovered": 3, 00:10:06.641 "num_base_bdevs_operational": 4, 00:10:06.641 "base_bdevs_list": [ 00:10:06.641 { 00:10:06.641 "name": null, 00:10:06.641 "uuid": "11d42d66-aee1-4597-b264-dfb3c55dc63e", 00:10:06.641 "is_configured": false, 00:10:06.641 "data_offset": 0, 00:10:06.641 "data_size": 63488 00:10:06.641 }, 00:10:06.641 { 00:10:06.641 "name": "BaseBdev2", 00:10:06.641 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:06.641 "is_configured": true, 00:10:06.641 "data_offset": 2048, 00:10:06.641 "data_size": 63488 00:10:06.641 }, 00:10:06.641 { 00:10:06.641 "name": "BaseBdev3", 00:10:06.641 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:06.641 "is_configured": true, 00:10:06.641 "data_offset": 2048, 00:10:06.641 "data_size": 63488 00:10:06.641 }, 00:10:06.641 { 00:10:06.641 "name": "BaseBdev4", 00:10:06.641 "uuid": "fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:06.641 "is_configured": true, 00:10:06.641 "data_offset": 2048, 00:10:06.641 "data_size": 63488 00:10:06.641 } 00:10:06.641 ] 00:10:06.641 }' 00:10:06.641 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.641 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.902 03:16:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 11d42d66-aee1-4597-b264-dfb3c55dc63e 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.902 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.162 [2024-11-20 03:16:56.571465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:07.162 [2024-11-20 03:16:56.571741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:07.162 [2024-11-20 03:16:56.571767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:07.163 [2024-11-20 03:16:56.572045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:07.163 [2024-11-20 03:16:56.572192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:07.163 [2024-11-20 03:16:56.572205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:07.163 [2024-11-20 03:16:56.572329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.163 NewBaseBdev 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.163 03:16:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.163 [ 00:10:07.163 { 00:10:07.163 "name": "NewBaseBdev", 00:10:07.163 "aliases": [ 00:10:07.163 "11d42d66-aee1-4597-b264-dfb3c55dc63e" 00:10:07.163 ], 00:10:07.163 "product_name": "Malloc disk", 00:10:07.163 "block_size": 512, 00:10:07.163 "num_blocks": 65536, 00:10:07.163 "uuid": "11d42d66-aee1-4597-b264-dfb3c55dc63e", 00:10:07.163 "assigned_rate_limits": { 00:10:07.163 "rw_ios_per_sec": 0, 00:10:07.163 "rw_mbytes_per_sec": 0, 00:10:07.163 "r_mbytes_per_sec": 0, 00:10:07.163 "w_mbytes_per_sec": 0 00:10:07.163 }, 00:10:07.163 "claimed": true, 00:10:07.163 "claim_type": "exclusive_write", 00:10:07.163 "zoned": false, 00:10:07.163 "supported_io_types": { 00:10:07.163 "read": true, 00:10:07.163 "write": true, 00:10:07.163 "unmap": true, 00:10:07.163 "flush": true, 00:10:07.163 "reset": true, 00:10:07.163 "nvme_admin": false, 00:10:07.163 "nvme_io": false, 00:10:07.163 "nvme_io_md": false, 00:10:07.163 "write_zeroes": true, 00:10:07.163 "zcopy": true, 00:10:07.163 "get_zone_info": false, 00:10:07.163 "zone_management": false, 00:10:07.163 "zone_append": false, 00:10:07.163 "compare": false, 00:10:07.163 "compare_and_write": false, 00:10:07.163 "abort": true, 00:10:07.163 "seek_hole": false, 00:10:07.163 "seek_data": false, 00:10:07.163 "copy": true, 00:10:07.163 "nvme_iov_md": false 00:10:07.163 }, 00:10:07.163 "memory_domains": [ 00:10:07.163 { 00:10:07.163 "dma_device_id": "system", 00:10:07.163 "dma_device_type": 1 00:10:07.163 }, 00:10:07.163 { 00:10:07.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.163 "dma_device_type": 2 00:10:07.163 } 00:10:07.163 ], 00:10:07.163 "driver_specific": {} 00:10:07.163 } 00:10:07.163 ] 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.163 03:16:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.163 "name": "Existed_Raid", 00:10:07.163 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:07.163 "strip_size_kb": 64, 00:10:07.163 
"state": "online", 00:10:07.163 "raid_level": "raid0", 00:10:07.163 "superblock": true, 00:10:07.163 "num_base_bdevs": 4, 00:10:07.163 "num_base_bdevs_discovered": 4, 00:10:07.163 "num_base_bdevs_operational": 4, 00:10:07.163 "base_bdevs_list": [ 00:10:07.163 { 00:10:07.163 "name": "NewBaseBdev", 00:10:07.163 "uuid": "11d42d66-aee1-4597-b264-dfb3c55dc63e", 00:10:07.163 "is_configured": true, 00:10:07.163 "data_offset": 2048, 00:10:07.163 "data_size": 63488 00:10:07.163 }, 00:10:07.163 { 00:10:07.163 "name": "BaseBdev2", 00:10:07.163 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:07.163 "is_configured": true, 00:10:07.163 "data_offset": 2048, 00:10:07.163 "data_size": 63488 00:10:07.163 }, 00:10:07.163 { 00:10:07.163 "name": "BaseBdev3", 00:10:07.163 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:07.163 "is_configured": true, 00:10:07.163 "data_offset": 2048, 00:10:07.163 "data_size": 63488 00:10:07.163 }, 00:10:07.163 { 00:10:07.163 "name": "BaseBdev4", 00:10:07.163 "uuid": "fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:07.163 "is_configured": true, 00:10:07.163 "data_offset": 2048, 00:10:07.163 "data_size": 63488 00:10:07.163 } 00:10:07.163 ] 00:10:07.163 }' 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.163 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.745 
03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.745 [2024-11-20 03:16:57.099002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.745 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.745 "name": "Existed_Raid", 00:10:07.745 "aliases": [ 00:10:07.745 "ba18de38-315d-4543-bbcb-9883fab35c53" 00:10:07.745 ], 00:10:07.745 "product_name": "Raid Volume", 00:10:07.745 "block_size": 512, 00:10:07.745 "num_blocks": 253952, 00:10:07.745 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:07.745 "assigned_rate_limits": { 00:10:07.745 "rw_ios_per_sec": 0, 00:10:07.745 "rw_mbytes_per_sec": 0, 00:10:07.745 "r_mbytes_per_sec": 0, 00:10:07.745 "w_mbytes_per_sec": 0 00:10:07.745 }, 00:10:07.745 "claimed": false, 00:10:07.745 "zoned": false, 00:10:07.745 "supported_io_types": { 00:10:07.745 "read": true, 00:10:07.745 "write": true, 00:10:07.745 "unmap": true, 00:10:07.745 "flush": true, 00:10:07.745 "reset": true, 00:10:07.745 "nvme_admin": false, 00:10:07.745 "nvme_io": false, 00:10:07.745 "nvme_io_md": false, 00:10:07.745 "write_zeroes": true, 00:10:07.745 "zcopy": false, 00:10:07.745 "get_zone_info": false, 00:10:07.745 "zone_management": false, 00:10:07.745 "zone_append": false, 00:10:07.745 "compare": false, 00:10:07.745 "compare_and_write": false, 00:10:07.745 "abort": 
false, 00:10:07.745 "seek_hole": false, 00:10:07.745 "seek_data": false, 00:10:07.745 "copy": false, 00:10:07.745 "nvme_iov_md": false 00:10:07.745 }, 00:10:07.745 "memory_domains": [ 00:10:07.745 { 00:10:07.745 "dma_device_id": "system", 00:10:07.745 "dma_device_type": 1 00:10:07.745 }, 00:10:07.745 { 00:10:07.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.745 "dma_device_type": 2 00:10:07.745 }, 00:10:07.745 { 00:10:07.745 "dma_device_id": "system", 00:10:07.745 "dma_device_type": 1 00:10:07.745 }, 00:10:07.745 { 00:10:07.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.745 "dma_device_type": 2 00:10:07.745 }, 00:10:07.745 { 00:10:07.745 "dma_device_id": "system", 00:10:07.745 "dma_device_type": 1 00:10:07.745 }, 00:10:07.745 { 00:10:07.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.745 "dma_device_type": 2 00:10:07.745 }, 00:10:07.745 { 00:10:07.745 "dma_device_id": "system", 00:10:07.745 "dma_device_type": 1 00:10:07.745 }, 00:10:07.745 { 00:10:07.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.745 "dma_device_type": 2 00:10:07.745 } 00:10:07.746 ], 00:10:07.746 "driver_specific": { 00:10:07.746 "raid": { 00:10:07.746 "uuid": "ba18de38-315d-4543-bbcb-9883fab35c53", 00:10:07.746 "strip_size_kb": 64, 00:10:07.746 "state": "online", 00:10:07.746 "raid_level": "raid0", 00:10:07.746 "superblock": true, 00:10:07.746 "num_base_bdevs": 4, 00:10:07.746 "num_base_bdevs_discovered": 4, 00:10:07.746 "num_base_bdevs_operational": 4, 00:10:07.746 "base_bdevs_list": [ 00:10:07.746 { 00:10:07.746 "name": "NewBaseBdev", 00:10:07.746 "uuid": "11d42d66-aee1-4597-b264-dfb3c55dc63e", 00:10:07.746 "is_configured": true, 00:10:07.746 "data_offset": 2048, 00:10:07.746 "data_size": 63488 00:10:07.746 }, 00:10:07.746 { 00:10:07.746 "name": "BaseBdev2", 00:10:07.746 "uuid": "e5ea8c57-7153-408b-ad25-4e38e12ff6de", 00:10:07.746 "is_configured": true, 00:10:07.746 "data_offset": 2048, 00:10:07.746 "data_size": 63488 00:10:07.746 }, 00:10:07.746 { 00:10:07.746 
"name": "BaseBdev3", 00:10:07.746 "uuid": "d110afdd-9d82-4d29-97c0-be7f93a2415b", 00:10:07.746 "is_configured": true, 00:10:07.746 "data_offset": 2048, 00:10:07.746 "data_size": 63488 00:10:07.746 }, 00:10:07.746 { 00:10:07.746 "name": "BaseBdev4", 00:10:07.746 "uuid": "fb9323e0-0536-4c25-b679-8d12b92c46ba", 00:10:07.746 "is_configured": true, 00:10:07.746 "data_offset": 2048, 00:10:07.746 "data_size": 63488 00:10:07.746 } 00:10:07.746 ] 00:10:07.746 } 00:10:07.746 } 00:10:07.746 }' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:07.746 BaseBdev2 00:10:07.746 BaseBdev3 00:10:07.746 BaseBdev4' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.746 03:16:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.746 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.005 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.005 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.005 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.005 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.005 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.005 [2024-11-20 03:16:57.402128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.005 [2024-11-20 03:16:57.402167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.005 [2024-11-20 03:16:57.402260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.005 [2024-11-20 03:16:57.402328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.005 [2024-11-20 03:16:57.402345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69905 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69905 ']' 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69905 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69905 00:10:08.006 killing process with pid 69905 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69905' 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69905 00:10:08.006 [2024-11-20 03:16:57.450376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.006 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69905 00:10:08.265 [2024-11-20 03:16:57.845212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.643 03:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:09.643 00:10:09.643 real 0m11.518s 00:10:09.643 user 0m18.294s 00:10:09.643 sys 0m2.071s 00:10:09.643 03:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.643 03:16:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.643 ************************************ 00:10:09.643 END TEST raid_state_function_test_sb 00:10:09.643 ************************************ 00:10:09.643 03:16:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:09.643 03:16:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:09.643 03:16:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.643 03:16:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.643 ************************************ 00:10:09.643 START TEST raid_superblock_test 00:10:09.643 ************************************ 00:10:09.643 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:09.643 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:09.643 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:09.643 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:09.643 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:09.643 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:09.643 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:09.643 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70570 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70570 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70570 ']' 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.644 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.644 [2024-11-20 03:16:59.125494] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:10:09.644 [2024-11-20 03:16:59.125641] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70570 ] 00:10:09.902 [2024-11-20 03:16:59.301786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.902 [2024-11-20 03:16:59.420385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.160 [2024-11-20 03:16:59.635145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.160 [2024-11-20 03:16:59.635214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.419 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.420 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:10.420 
03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.420 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.420 malloc1 00:10:10.420 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.420 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:10.420 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.420 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.420 [2024-11-20 03:17:00.004460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:10.420 [2024-11-20 03:17:00.004535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.420 [2024-11-20 03:17:00.004578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:10.420 [2024-11-20 03:17:00.004588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.420 [2024-11-20 03:17:00.006798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.420 [2024-11-20 03:17:00.006838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:10.420 pt1 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.420 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.420 malloc2 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.680 [2024-11-20 03:17:00.059553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:10.680 [2024-11-20 03:17:00.059657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.680 [2024-11-20 03:17:00.059685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:10.680 [2024-11-20 03:17:00.059708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.680 [2024-11-20 03:17:00.061899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.680 [2024-11-20 03:17:00.061937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:10.680 
pt2 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.680 malloc3 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.680 [2024-11-20 03:17:00.126357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:10.680 [2024-11-20 03:17:00.126423] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.680 [2024-11-20 03:17:00.126470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:10.680 [2024-11-20 03:17:00.126480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.680 [2024-11-20 03:17:00.128647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.680 [2024-11-20 03:17:00.128686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:10.680 pt3 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.680 malloc4 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.680 [2024-11-20 03:17:00.182243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:10.680 [2024-11-20 03:17:00.182300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.680 [2024-11-20 03:17:00.182319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:10.680 [2024-11-20 03:17:00.182328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.680 [2024-11-20 03:17:00.184417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.680 [2024-11-20 03:17:00.184452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:10.680 pt4 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.680 [2024-11-20 03:17:00.194251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:10.680 [2024-11-20 
03:17:00.196080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:10.680 [2024-11-20 03:17:00.196143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:10.680 [2024-11-20 03:17:00.196203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:10.680 [2024-11-20 03:17:00.196409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:10.680 [2024-11-20 03:17:00.196428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.680 [2024-11-20 03:17:00.196709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:10.680 [2024-11-20 03:17:00.196899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:10.680 [2024-11-20 03:17:00.196919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:10.680 [2024-11-20 03:17:00.197073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.680 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.680 "name": "raid_bdev1", 00:10:10.680 "uuid": "39abb862-06b4-4f35-812b-c03be27ac308", 00:10:10.680 "strip_size_kb": 64, 00:10:10.680 "state": "online", 00:10:10.680 "raid_level": "raid0", 00:10:10.680 "superblock": true, 00:10:10.680 "num_base_bdevs": 4, 00:10:10.680 "num_base_bdevs_discovered": 4, 00:10:10.680 "num_base_bdevs_operational": 4, 00:10:10.680 "base_bdevs_list": [ 00:10:10.680 { 00:10:10.680 "name": "pt1", 00:10:10.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.680 "is_configured": true, 00:10:10.680 "data_offset": 2048, 00:10:10.680 "data_size": 63488 00:10:10.680 }, 00:10:10.680 { 00:10:10.680 "name": "pt2", 00:10:10.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.680 "is_configured": true, 00:10:10.680 "data_offset": 2048, 00:10:10.680 "data_size": 63488 00:10:10.680 }, 00:10:10.680 { 00:10:10.680 "name": "pt3", 00:10:10.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.680 "is_configured": true, 00:10:10.680 "data_offset": 2048, 00:10:10.680 
"data_size": 63488 00:10:10.680 }, 00:10:10.680 { 00:10:10.680 "name": "pt4", 00:10:10.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:10.680 "is_configured": true, 00:10:10.681 "data_offset": 2048, 00:10:10.681 "data_size": 63488 00:10:10.681 } 00:10:10.681 ] 00:10:10.681 }' 00:10:10.681 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.681 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.248 [2024-11-20 03:17:00.653816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.248 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.248 "name": "raid_bdev1", 00:10:11.248 "aliases": [ 00:10:11.248 "39abb862-06b4-4f35-812b-c03be27ac308" 
00:10:11.248 ], 00:10:11.249 "product_name": "Raid Volume", 00:10:11.249 "block_size": 512, 00:10:11.249 "num_blocks": 253952, 00:10:11.249 "uuid": "39abb862-06b4-4f35-812b-c03be27ac308", 00:10:11.249 "assigned_rate_limits": { 00:10:11.249 "rw_ios_per_sec": 0, 00:10:11.249 "rw_mbytes_per_sec": 0, 00:10:11.249 "r_mbytes_per_sec": 0, 00:10:11.249 "w_mbytes_per_sec": 0 00:10:11.249 }, 00:10:11.249 "claimed": false, 00:10:11.249 "zoned": false, 00:10:11.249 "supported_io_types": { 00:10:11.249 "read": true, 00:10:11.249 "write": true, 00:10:11.249 "unmap": true, 00:10:11.249 "flush": true, 00:10:11.249 "reset": true, 00:10:11.249 "nvme_admin": false, 00:10:11.249 "nvme_io": false, 00:10:11.249 "nvme_io_md": false, 00:10:11.249 "write_zeroes": true, 00:10:11.249 "zcopy": false, 00:10:11.249 "get_zone_info": false, 00:10:11.249 "zone_management": false, 00:10:11.249 "zone_append": false, 00:10:11.249 "compare": false, 00:10:11.249 "compare_and_write": false, 00:10:11.249 "abort": false, 00:10:11.249 "seek_hole": false, 00:10:11.249 "seek_data": false, 00:10:11.249 "copy": false, 00:10:11.249 "nvme_iov_md": false 00:10:11.249 }, 00:10:11.249 "memory_domains": [ 00:10:11.249 { 00:10:11.249 "dma_device_id": "system", 00:10:11.249 "dma_device_type": 1 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.249 "dma_device_type": 2 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "dma_device_id": "system", 00:10:11.249 "dma_device_type": 1 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.249 "dma_device_type": 2 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "dma_device_id": "system", 00:10:11.249 "dma_device_type": 1 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.249 "dma_device_type": 2 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "dma_device_id": "system", 00:10:11.249 "dma_device_type": 1 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:11.249 "dma_device_type": 2 00:10:11.249 } 00:10:11.249 ], 00:10:11.249 "driver_specific": { 00:10:11.249 "raid": { 00:10:11.249 "uuid": "39abb862-06b4-4f35-812b-c03be27ac308", 00:10:11.249 "strip_size_kb": 64, 00:10:11.249 "state": "online", 00:10:11.249 "raid_level": "raid0", 00:10:11.249 "superblock": true, 00:10:11.249 "num_base_bdevs": 4, 00:10:11.249 "num_base_bdevs_discovered": 4, 00:10:11.249 "num_base_bdevs_operational": 4, 00:10:11.249 "base_bdevs_list": [ 00:10:11.249 { 00:10:11.249 "name": "pt1", 00:10:11.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.249 "is_configured": true, 00:10:11.249 "data_offset": 2048, 00:10:11.249 "data_size": 63488 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "name": "pt2", 00:10:11.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.249 "is_configured": true, 00:10:11.249 "data_offset": 2048, 00:10:11.249 "data_size": 63488 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "name": "pt3", 00:10:11.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.249 "is_configured": true, 00:10:11.249 "data_offset": 2048, 00:10:11.249 "data_size": 63488 00:10:11.249 }, 00:10:11.249 { 00:10:11.249 "name": "pt4", 00:10:11.249 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:11.249 "is_configured": true, 00:10:11.249 "data_offset": 2048, 00:10:11.249 "data_size": 63488 00:10:11.249 } 00:10:11.249 ] 00:10:11.249 } 00:10:11.249 } 00:10:11.249 }' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:11.249 pt2 00:10:11.249 pt3 00:10:11.249 pt4' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.249 03:17:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.249 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 [2024-11-20 03:17:00.949255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=39abb862-06b4-4f35-812b-c03be27ac308 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 39abb862-06b4-4f35-812b-c03be27ac308 ']' 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 [2024-11-20 03:17:00.980896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.509 [2024-11-20 03:17:00.980928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.509 [2024-11-20 03:17:00.981020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.509 [2024-11-20 03:17:00.981089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.509 [2024-11-20 03:17:00.981105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.509 03:17:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.509 [2024-11-20 03:17:01.128661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:11.509 [2024-11-20 03:17:01.130580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:11.509 [2024-11-20 03:17:01.130655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:11.509 [2024-11-20 03:17:01.130693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:11.509 [2024-11-20 03:17:01.130751] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:11.509 [2024-11-20 03:17:01.130809] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:11.509 [2024-11-20 03:17:01.130830] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:11.509 [2024-11-20 03:17:01.130851] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:11.509 [2024-11-20 03:17:01.130866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.509 [2024-11-20 03:17:01.130882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:11.509 request: 00:10:11.509 { 00:10:11.509 "name": "raid_bdev1", 00:10:11.509 "raid_level": "raid0", 00:10:11.509 "base_bdevs": [ 00:10:11.509 "malloc1", 00:10:11.509 "malloc2", 00:10:11.509 "malloc3", 00:10:11.509 "malloc4" 00:10:11.509 ], 00:10:11.509 "strip_size_kb": 64, 00:10:11.509 "superblock": false, 00:10:11.509 "method": "bdev_raid_create", 00:10:11.509 "req_id": 1 00:10:11.509 } 00:10:11.509 Got JSON-RPC error response 00:10:11.509 response: 00:10:11.509 { 00:10:11.509 "code": -17, 00:10:11.509 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:11.509 } 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.509 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.769 [2024-11-20 03:17:01.196516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:11.769 [2024-11-20 03:17:01.196667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.769 [2024-11-20 03:17:01.196709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:11.769 [2024-11-20 03:17:01.196746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.769 [2024-11-20 03:17:01.199068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.769 [2024-11-20 03:17:01.199168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:11.769 [2024-11-20 03:17:01.199287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:11.769 [2024-11-20 03:17:01.199392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:11.769 pt1 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.769 "name": "raid_bdev1", 00:10:11.769 "uuid": "39abb862-06b4-4f35-812b-c03be27ac308", 00:10:11.769 "strip_size_kb": 64, 00:10:11.769 "state": "configuring", 00:10:11.769 "raid_level": "raid0", 00:10:11.769 "superblock": true, 00:10:11.769 "num_base_bdevs": 4, 00:10:11.769 "num_base_bdevs_discovered": 1, 00:10:11.769 "num_base_bdevs_operational": 4, 00:10:11.769 "base_bdevs_list": [ 00:10:11.769 { 00:10:11.769 "name": "pt1", 00:10:11.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.769 "is_configured": true, 00:10:11.769 "data_offset": 2048, 00:10:11.769 "data_size": 63488 00:10:11.769 }, 00:10:11.769 { 00:10:11.769 "name": null, 00:10:11.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.769 "is_configured": false, 00:10:11.769 "data_offset": 2048, 00:10:11.769 "data_size": 63488 00:10:11.769 }, 00:10:11.769 { 00:10:11.769 "name": null, 00:10:11.769 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.769 "is_configured": false, 00:10:11.769 "data_offset": 2048, 00:10:11.769 "data_size": 63488 00:10:11.769 }, 00:10:11.769 { 00:10:11.769 "name": null, 00:10:11.769 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:11.769 "is_configured": false, 00:10:11.769 "data_offset": 2048, 00:10:11.769 "data_size": 63488 00:10:11.769 } 00:10:11.769 ] 00:10:11.769 }' 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.769 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.028 [2024-11-20 03:17:01.627829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.028 [2024-11-20 03:17:01.627974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.028 [2024-11-20 03:17:01.627999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:12.028 [2024-11-20 03:17:01.628011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.028 [2024-11-20 03:17:01.628499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.028 [2024-11-20 03:17:01.628529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.028 [2024-11-20 03:17:01.628624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.028 [2024-11-20 03:17:01.628651] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.028 pt2 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.028 [2024-11-20 03:17:01.639823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.028 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.029 03:17:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.029 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.288 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.288 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.288 "name": "raid_bdev1", 00:10:12.288 "uuid": "39abb862-06b4-4f35-812b-c03be27ac308", 00:10:12.288 "strip_size_kb": 64, 00:10:12.288 "state": "configuring", 00:10:12.288 "raid_level": "raid0", 00:10:12.288 "superblock": true, 00:10:12.288 "num_base_bdevs": 4, 00:10:12.288 "num_base_bdevs_discovered": 1, 00:10:12.288 "num_base_bdevs_operational": 4, 00:10:12.288 "base_bdevs_list": [ 00:10:12.288 { 00:10:12.288 "name": "pt1", 00:10:12.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.288 "is_configured": true, 00:10:12.288 "data_offset": 2048, 00:10:12.288 "data_size": 63488 00:10:12.288 }, 00:10:12.288 { 00:10:12.288 "name": null, 00:10:12.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.288 "is_configured": false, 00:10:12.288 "data_offset": 0, 00:10:12.288 "data_size": 63488 00:10:12.288 }, 00:10:12.288 { 00:10:12.288 "name": null, 00:10:12.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.288 "is_configured": false, 00:10:12.288 "data_offset": 2048, 00:10:12.288 "data_size": 63488 00:10:12.288 }, 00:10:12.288 { 00:10:12.288 "name": null, 00:10:12.288 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.288 "is_configured": false, 00:10:12.288 "data_offset": 2048, 00:10:12.288 "data_size": 63488 00:10:12.288 } 00:10:12.288 ] 00:10:12.288 }' 00:10:12.288 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.288 03:17:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.547 [2024-11-20 03:17:02.067115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.547 [2024-11-20 03:17:02.067248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.547 [2024-11-20 03:17:02.067295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:12.547 [2024-11-20 03:17:02.067326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.547 [2024-11-20 03:17:02.067807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.547 [2024-11-20 03:17:02.067865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.547 [2024-11-20 03:17:02.067988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.547 [2024-11-20 03:17:02.068037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.547 pt2 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.547 [2024-11-20 03:17:02.079065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.547 [2024-11-20 03:17:02.079165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.547 [2024-11-20 03:17:02.079212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:12.547 [2024-11-20 03:17:02.079246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.547 [2024-11-20 03:17:02.079692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.547 [2024-11-20 03:17:02.079753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.547 [2024-11-20 03:17:02.079853] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:12.547 [2024-11-20 03:17:02.079898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.547 pt3 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.547 [2024-11-20 03:17:02.091018] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:12.547 [2024-11-20 03:17:02.091109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.547 [2024-11-20 03:17:02.091136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:12.547 [2024-11-20 03:17:02.091144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.547 [2024-11-20 03:17:02.091542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.547 [2024-11-20 03:17:02.091558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:12.547 [2024-11-20 03:17:02.091651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:12.547 [2024-11-20 03:17:02.091672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:12.547 [2024-11-20 03:17:02.091810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:12.547 [2024-11-20 03:17:02.091819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:12.547 [2024-11-20 03:17:02.092058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:12.547 [2024-11-20 03:17:02.092213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:12.547 [2024-11-20 03:17:02.092225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:12.547 [2024-11-20 03:17:02.092356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.547 pt4 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.547 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.547 "name": "raid_bdev1", 00:10:12.547 "uuid": "39abb862-06b4-4f35-812b-c03be27ac308", 00:10:12.547 "strip_size_kb": 64, 00:10:12.547 "state": "online", 00:10:12.547 "raid_level": "raid0", 00:10:12.547 
"superblock": true, 00:10:12.547 "num_base_bdevs": 4, 00:10:12.547 "num_base_bdevs_discovered": 4, 00:10:12.547 "num_base_bdevs_operational": 4, 00:10:12.547 "base_bdevs_list": [ 00:10:12.547 { 00:10:12.547 "name": "pt1", 00:10:12.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.547 "is_configured": true, 00:10:12.547 "data_offset": 2048, 00:10:12.547 "data_size": 63488 00:10:12.547 }, 00:10:12.548 { 00:10:12.548 "name": "pt2", 00:10:12.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.548 "is_configured": true, 00:10:12.548 "data_offset": 2048, 00:10:12.548 "data_size": 63488 00:10:12.548 }, 00:10:12.548 { 00:10:12.548 "name": "pt3", 00:10:12.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.548 "is_configured": true, 00:10:12.548 "data_offset": 2048, 00:10:12.548 "data_size": 63488 00:10:12.548 }, 00:10:12.548 { 00:10:12.548 "name": "pt4", 00:10:12.548 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.548 "is_configured": true, 00:10:12.548 "data_offset": 2048, 00:10:12.548 "data_size": 63488 00:10:12.548 } 00:10:12.548 ] 00:10:12.548 }' 00:10:12.548 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.548 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.115 03:17:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.115 [2024-11-20 03:17:02.554717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.115 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.115 "name": "raid_bdev1", 00:10:13.115 "aliases": [ 00:10:13.115 "39abb862-06b4-4f35-812b-c03be27ac308" 00:10:13.115 ], 00:10:13.115 "product_name": "Raid Volume", 00:10:13.115 "block_size": 512, 00:10:13.115 "num_blocks": 253952, 00:10:13.115 "uuid": "39abb862-06b4-4f35-812b-c03be27ac308", 00:10:13.115 "assigned_rate_limits": { 00:10:13.115 "rw_ios_per_sec": 0, 00:10:13.115 "rw_mbytes_per_sec": 0, 00:10:13.115 "r_mbytes_per_sec": 0, 00:10:13.115 "w_mbytes_per_sec": 0 00:10:13.115 }, 00:10:13.115 "claimed": false, 00:10:13.115 "zoned": false, 00:10:13.115 "supported_io_types": { 00:10:13.115 "read": true, 00:10:13.115 "write": true, 00:10:13.115 "unmap": true, 00:10:13.115 "flush": true, 00:10:13.115 "reset": true, 00:10:13.115 "nvme_admin": false, 00:10:13.115 "nvme_io": false, 00:10:13.115 "nvme_io_md": false, 00:10:13.115 "write_zeroes": true, 00:10:13.115 "zcopy": false, 00:10:13.115 "get_zone_info": false, 00:10:13.115 "zone_management": false, 00:10:13.115 "zone_append": false, 00:10:13.115 "compare": false, 00:10:13.115 "compare_and_write": false, 00:10:13.115 "abort": false, 00:10:13.115 "seek_hole": false, 00:10:13.115 "seek_data": false, 00:10:13.115 "copy": false, 00:10:13.115 "nvme_iov_md": false 00:10:13.115 }, 00:10:13.115 
"memory_domains": [ 00:10:13.115 { 00:10:13.115 "dma_device_id": "system", 00:10:13.115 "dma_device_type": 1 00:10:13.115 }, 00:10:13.115 { 00:10:13.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.115 "dma_device_type": 2 00:10:13.115 }, 00:10:13.115 { 00:10:13.115 "dma_device_id": "system", 00:10:13.115 "dma_device_type": 1 00:10:13.115 }, 00:10:13.115 { 00:10:13.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.115 "dma_device_type": 2 00:10:13.115 }, 00:10:13.115 { 00:10:13.115 "dma_device_id": "system", 00:10:13.115 "dma_device_type": 1 00:10:13.115 }, 00:10:13.116 { 00:10:13.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.116 "dma_device_type": 2 00:10:13.116 }, 00:10:13.116 { 00:10:13.116 "dma_device_id": "system", 00:10:13.116 "dma_device_type": 1 00:10:13.116 }, 00:10:13.116 { 00:10:13.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.116 "dma_device_type": 2 00:10:13.116 } 00:10:13.116 ], 00:10:13.116 "driver_specific": { 00:10:13.116 "raid": { 00:10:13.116 "uuid": "39abb862-06b4-4f35-812b-c03be27ac308", 00:10:13.116 "strip_size_kb": 64, 00:10:13.116 "state": "online", 00:10:13.116 "raid_level": "raid0", 00:10:13.116 "superblock": true, 00:10:13.116 "num_base_bdevs": 4, 00:10:13.116 "num_base_bdevs_discovered": 4, 00:10:13.116 "num_base_bdevs_operational": 4, 00:10:13.116 "base_bdevs_list": [ 00:10:13.116 { 00:10:13.116 "name": "pt1", 00:10:13.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.116 "is_configured": true, 00:10:13.116 "data_offset": 2048, 00:10:13.116 "data_size": 63488 00:10:13.116 }, 00:10:13.116 { 00:10:13.116 "name": "pt2", 00:10:13.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.116 "is_configured": true, 00:10:13.116 "data_offset": 2048, 00:10:13.116 "data_size": 63488 00:10:13.116 }, 00:10:13.116 { 00:10:13.116 "name": "pt3", 00:10:13.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.116 "is_configured": true, 00:10:13.116 "data_offset": 2048, 00:10:13.116 "data_size": 63488 
00:10:13.116 }, 00:10:13.116 { 00:10:13.116 "name": "pt4", 00:10:13.116 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.116 "is_configured": true, 00:10:13.116 "data_offset": 2048, 00:10:13.116 "data_size": 63488 00:10:13.116 } 00:10:13.116 ] 00:10:13.116 } 00:10:13.116 } 00:10:13.116 }' 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.116 pt2 00:10:13.116 pt3 00:10:13.116 pt4' 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.116 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:13.375 [2024-11-20 03:17:02.878085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 39abb862-06b4-4f35-812b-c03be27ac308 '!=' 39abb862-06b4-4f35-812b-c03be27ac308 ']' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70570 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70570 ']' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70570 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70570 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70570' 00:10:13.375 killing process with pid 70570 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70570 00:10:13.375 [2024-11-20 03:17:02.953863] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.375 [2024-11-20 03:17:02.954027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.375 03:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70570 00:10:13.375 [2024-11-20 03:17:02.954105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.375 [2024-11-20 03:17:02.954115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:13.943 [2024-11-20 03:17:03.359759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.884 03:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:14.884 ************************************ 00:10:14.884 END TEST raid_superblock_test 00:10:14.884 ************************************ 00:10:14.884 00:10:14.884 real 0m5.444s 00:10:14.884 user 0m7.747s 00:10:14.884 sys 0m0.948s 00:10:14.884 03:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.884 03:17:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.150 03:17:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:15.150 03:17:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:15.150 03:17:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.150 03:17:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.150 ************************************ 00:10:15.150 START TEST raid_read_error_test 00:10:15.150 ************************************ 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LzXeUaYRAt 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70829 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70829 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70829 ']' 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.150 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.150 [2024-11-20 03:17:04.653739] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:10:15.150 [2024-11-20 03:17:04.653955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70829 ] 00:10:15.409 [2024-11-20 03:17:04.829599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.409 [2024-11-20 03:17:04.948086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.668 [2024-11-20 03:17:05.144258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.668 [2024-11-20 03:17:05.144329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.927 BaseBdev1_malloc 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.927 true 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.927 [2024-11-20 03:17:05.552044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:15.927 [2024-11-20 03:17:05.552104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.927 [2024-11-20 03:17:05.552139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:15.927 [2024-11-20 03:17:05.552151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.927 [2024-11-20 03:17:05.554226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.927 [2024-11-20 03:17:05.554265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:15.927 BaseBdev1 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.927 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.187 BaseBdev2_malloc 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.187 true 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.187 [2024-11-20 03:17:05.619795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:16.187 [2024-11-20 03:17:05.619875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.187 [2024-11-20 03:17:05.619895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:16.187 [2024-11-20 03:17:05.619908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.187 [2024-11-20 03:17:05.622207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.187 [2024-11-20 03:17:05.622249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.187 BaseBdev2 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.187 BaseBdev3_malloc 00:10:16.187 03:17:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.187 true 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.187 [2024-11-20 03:17:05.698665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:16.187 [2024-11-20 03:17:05.698720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.187 [2024-11-20 03:17:05.698739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:16.187 [2024-11-20 03:17:05.698749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.187 [2024-11-20 03:17:05.700822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.187 [2024-11-20 03:17:05.700860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:16.187 BaseBdev3 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.187 BaseBdev4_malloc 00:10:16.187 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.188 true 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.188 [2024-11-20 03:17:05.761218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:16.188 [2024-11-20 03:17:05.761303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.188 [2024-11-20 03:17:05.761333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:16.188 [2024-11-20 03:17:05.761350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.188 [2024-11-20 03:17:05.764181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.188 [2024-11-20 03:17:05.764260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:16.188 BaseBdev4 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.188 [2024-11-20 03:17:05.773291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.188 [2024-11-20 03:17:05.775568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.188 [2024-11-20 03:17:05.775709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.188 [2024-11-20 03:17:05.775810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:16.188 [2024-11-20 03:17:05.776110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:16.188 [2024-11-20 03:17:05.776142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:16.188 [2024-11-20 03:17:05.776462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:16.188 [2024-11-20 03:17:05.776704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:16.188 [2024-11-20 03:17:05.776734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:16.188 [2024-11-20 03:17:05.776942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:16.188 03:17:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.188 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.446 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.446 "name": "raid_bdev1", 00:10:16.446 "uuid": "840addb1-de2e-4f10-b10f-158a2a87dbe9", 00:10:16.446 "strip_size_kb": 64, 00:10:16.446 "state": "online", 00:10:16.446 "raid_level": "raid0", 00:10:16.446 "superblock": true, 00:10:16.447 "num_base_bdevs": 4, 00:10:16.447 "num_base_bdevs_discovered": 4, 00:10:16.447 "num_base_bdevs_operational": 4, 00:10:16.447 "base_bdevs_list": [ 00:10:16.447 
{ 00:10:16.447 "name": "BaseBdev1", 00:10:16.447 "uuid": "f7b6d272-5e17-50cb-a7c7-1c53331e145f", 00:10:16.447 "is_configured": true, 00:10:16.447 "data_offset": 2048, 00:10:16.447 "data_size": 63488 00:10:16.447 }, 00:10:16.447 { 00:10:16.447 "name": "BaseBdev2", 00:10:16.447 "uuid": "dc2ea383-1df0-5756-b3b0-4db21809e3e6", 00:10:16.447 "is_configured": true, 00:10:16.447 "data_offset": 2048, 00:10:16.447 "data_size": 63488 00:10:16.447 }, 00:10:16.447 { 00:10:16.447 "name": "BaseBdev3", 00:10:16.447 "uuid": "71f18afb-8a43-5481-9d3b-ef8105a5243e", 00:10:16.447 "is_configured": true, 00:10:16.447 "data_offset": 2048, 00:10:16.447 "data_size": 63488 00:10:16.447 }, 00:10:16.447 { 00:10:16.447 "name": "BaseBdev4", 00:10:16.447 "uuid": "67e6abc0-2f04-52ab-b97f-05172a5b2ca3", 00:10:16.447 "is_configured": true, 00:10:16.447 "data_offset": 2048, 00:10:16.447 "data_size": 63488 00:10:16.447 } 00:10:16.447 ] 00:10:16.447 }' 00:10:16.447 03:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.447 03:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.704 03:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:16.704 03:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:16.704 [2024-11-20 03:17:06.309681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.641 03:17:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.641 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.900 03:17:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.900 "name": "raid_bdev1", 00:10:17.900 "uuid": "840addb1-de2e-4f10-b10f-158a2a87dbe9", 00:10:17.900 "strip_size_kb": 64, 00:10:17.900 "state": "online", 00:10:17.900 "raid_level": "raid0", 00:10:17.900 "superblock": true, 00:10:17.900 "num_base_bdevs": 4, 00:10:17.900 "num_base_bdevs_discovered": 4, 00:10:17.900 "num_base_bdevs_operational": 4, 00:10:17.900 "base_bdevs_list": [ 00:10:17.900 { 00:10:17.900 "name": "BaseBdev1", 00:10:17.900 "uuid": "f7b6d272-5e17-50cb-a7c7-1c53331e145f", 00:10:17.900 "is_configured": true, 00:10:17.900 "data_offset": 2048, 00:10:17.900 "data_size": 63488 00:10:17.900 }, 00:10:17.900 { 00:10:17.900 "name": "BaseBdev2", 00:10:17.900 "uuid": "dc2ea383-1df0-5756-b3b0-4db21809e3e6", 00:10:17.900 "is_configured": true, 00:10:17.900 "data_offset": 2048, 00:10:17.900 "data_size": 63488 00:10:17.900 }, 00:10:17.900 { 00:10:17.900 "name": "BaseBdev3", 00:10:17.900 "uuid": "71f18afb-8a43-5481-9d3b-ef8105a5243e", 00:10:17.900 "is_configured": true, 00:10:17.900 "data_offset": 2048, 00:10:17.900 "data_size": 63488 00:10:17.900 }, 00:10:17.900 { 00:10:17.900 "name": "BaseBdev4", 00:10:17.900 "uuid": "67e6abc0-2f04-52ab-b97f-05172a5b2ca3", 00:10:17.900 "is_configured": true, 00:10:17.900 "data_offset": 2048, 00:10:17.900 "data_size": 63488 00:10:17.900 } 00:10:17.900 ] 00:10:17.900 }' 00:10:17.900 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.900 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.159 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.159 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.159 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.159 [2024-11-20 03:17:07.685721] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.159 [2024-11-20 03:17:07.685762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.159 [2024-11-20 03:17:07.688749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.159 [2024-11-20 03:17:07.688817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.159 [2024-11-20 03:17:07.688866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.160 [2024-11-20 03:17:07.688879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:18.160 { 00:10:18.160 "results": [ 00:10:18.160 { 00:10:18.160 "job": "raid_bdev1", 00:10:18.160 "core_mask": "0x1", 00:10:18.160 "workload": "randrw", 00:10:18.160 "percentage": 50, 00:10:18.160 "status": "finished", 00:10:18.160 "queue_depth": 1, 00:10:18.160 "io_size": 131072, 00:10:18.160 "runtime": 1.376863, 00:10:18.160 "iops": 15132.95077288009, 00:10:18.160 "mibps": 1891.6188466100114, 00:10:18.160 "io_failed": 1, 00:10:18.160 "io_timeout": 0, 00:10:18.160 "avg_latency_us": 91.9489582794127, 00:10:18.160 "min_latency_us": 26.829694323144103, 00:10:18.160 "max_latency_us": 1452.380786026201 00:10:18.160 } 00:10:18.160 ], 00:10:18.160 "core_count": 1 00:10:18.160 } 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70829 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70829 ']' 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70829 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70829 00:10:18.160 killing process with pid 70829 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70829' 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70829 00:10:18.160 [2024-11-20 03:17:07.722413] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.160 03:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70829 00:10:18.728 [2024-11-20 03:17:08.055589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LzXeUaYRAt 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:19.666 00:10:19.666 real 0m4.710s 00:10:19.666 user 0m5.555s 00:10:19.666 sys 0m0.572s 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:19.666 03:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.666 ************************************ 00:10:19.666 END TEST raid_read_error_test 00:10:19.666 ************************************ 00:10:19.924 03:17:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:19.924 03:17:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:19.924 03:17:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.924 03:17:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.924 ************************************ 00:10:19.924 START TEST raid_write_error_test 00:10:19.924 ************************************ 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JGefjzwhWa 00:10:19.924 03:17:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70981 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70981 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70981 ']' 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.924 03:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.924 [2024-11-20 03:17:09.434321] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:10:19.924 [2024-11-20 03:17:09.434442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70981 ] 00:10:20.183 [2024-11-20 03:17:09.606953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.183 [2024-11-20 03:17:09.720965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.442 [2024-11-20 03:17:09.923766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.443 [2024-11-20 03:17:09.923836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.702 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.702 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.702 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.702 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:20.702 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.702 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.962 BaseBdev1_malloc 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.962 true 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.962 [2024-11-20 03:17:10.357522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.962 [2024-11-20 03:17:10.357579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.962 [2024-11-20 03:17:10.357617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:20.962 [2024-11-20 03:17:10.357640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.962 [2024-11-20 03:17:10.359996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.962 [2024-11-20 03:17:10.360039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.962 BaseBdev1 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.962 BaseBdev2_malloc 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:20.962 03:17:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.962 true 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.962 [2024-11-20 03:17:10.425586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.962 [2024-11-20 03:17:10.425664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.962 [2024-11-20 03:17:10.425683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:20.962 [2024-11-20 03:17:10.425695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.962 [2024-11-20 03:17:10.428006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.962 [2024-11-20 03:17:10.428046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.962 BaseBdev2 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:20.962 BaseBdev3_malloc 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.962 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.963 true 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.963 [2024-11-20 03:17:10.504398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:20.963 [2024-11-20 03:17:10.504452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.963 [2024-11-20 03:17:10.504471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:20.963 [2024-11-20 03:17:10.504482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.963 [2024-11-20 03:17:10.506781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.963 [2024-11-20 03:17:10.506822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:20.963 BaseBdev3 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.963 BaseBdev4_malloc 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.963 true 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.963 [2024-11-20 03:17:10.559701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:20.963 [2024-11-20 03:17:10.559754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.963 [2024-11-20 03:17:10.559773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:20.963 [2024-11-20 03:17:10.559783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.963 [2024-11-20 03:17:10.561987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.963 [2024-11-20 03:17:10.562029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:20.963 BaseBdev4 
00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.963 [2024-11-20 03:17:10.571739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.963 [2024-11-20 03:17:10.573585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.963 [2024-11-20 03:17:10.573678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.963 [2024-11-20 03:17:10.573746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.963 [2024-11-20 03:17:10.573959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:20.963 [2024-11-20 03:17:10.573985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.963 [2024-11-20 03:17:10.574232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:20.963 [2024-11-20 03:17:10.574410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:20.963 [2024-11-20 03:17:10.574425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:20.963 [2024-11-20 03:17:10.574622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.963 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.222 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.222 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.222 "name": "raid_bdev1", 00:10:21.222 "uuid": "e2aec9f7-e6d4-4f84-b106-ed784145332d", 00:10:21.222 "strip_size_kb": 64, 00:10:21.222 "state": "online", 00:10:21.222 "raid_level": "raid0", 00:10:21.222 "superblock": true, 00:10:21.222 "num_base_bdevs": 4, 00:10:21.222 "num_base_bdevs_discovered": 4, 00:10:21.222 
"num_base_bdevs_operational": 4, 00:10:21.222 "base_bdevs_list": [ 00:10:21.222 { 00:10:21.222 "name": "BaseBdev1", 00:10:21.222 "uuid": "84e06130-8370-5534-a509-ced2d8ff8187", 00:10:21.222 "is_configured": true, 00:10:21.222 "data_offset": 2048, 00:10:21.222 "data_size": 63488 00:10:21.222 }, 00:10:21.222 { 00:10:21.222 "name": "BaseBdev2", 00:10:21.222 "uuid": "3e2b9972-8928-530c-b03e-e339e854ad45", 00:10:21.222 "is_configured": true, 00:10:21.222 "data_offset": 2048, 00:10:21.222 "data_size": 63488 00:10:21.222 }, 00:10:21.222 { 00:10:21.222 "name": "BaseBdev3", 00:10:21.222 "uuid": "a00701b3-1594-5142-b8bf-979153b0eba4", 00:10:21.222 "is_configured": true, 00:10:21.222 "data_offset": 2048, 00:10:21.222 "data_size": 63488 00:10:21.222 }, 00:10:21.222 { 00:10:21.222 "name": "BaseBdev4", 00:10:21.222 "uuid": "c3d6a7e0-b3d5-55c4-ad11-94a74bb02260", 00:10:21.222 "is_configured": true, 00:10:21.223 "data_offset": 2048, 00:10:21.223 "data_size": 63488 00:10:21.223 } 00:10:21.223 ] 00:10:21.223 }' 00:10:21.223 03:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.223 03:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.481 03:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.481 03:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:21.740 [2024-11-20 03:17:11.120253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.677 "name": "raid_bdev1", 00:10:22.677 "uuid": "e2aec9f7-e6d4-4f84-b106-ed784145332d", 00:10:22.677 "strip_size_kb": 64, 00:10:22.677 "state": "online", 00:10:22.677 "raid_level": "raid0", 00:10:22.677 "superblock": true, 00:10:22.677 "num_base_bdevs": 4, 00:10:22.677 "num_base_bdevs_discovered": 4, 00:10:22.677 "num_base_bdevs_operational": 4, 00:10:22.677 "base_bdevs_list": [ 00:10:22.677 { 00:10:22.677 "name": "BaseBdev1", 00:10:22.677 "uuid": "84e06130-8370-5534-a509-ced2d8ff8187", 00:10:22.677 "is_configured": true, 00:10:22.677 "data_offset": 2048, 00:10:22.677 "data_size": 63488 00:10:22.677 }, 00:10:22.677 { 00:10:22.677 "name": "BaseBdev2", 00:10:22.677 "uuid": "3e2b9972-8928-530c-b03e-e339e854ad45", 00:10:22.677 "is_configured": true, 00:10:22.677 "data_offset": 2048, 00:10:22.677 "data_size": 63488 00:10:22.677 }, 00:10:22.677 { 00:10:22.677 "name": "BaseBdev3", 00:10:22.677 "uuid": "a00701b3-1594-5142-b8bf-979153b0eba4", 00:10:22.677 "is_configured": true, 00:10:22.677 "data_offset": 2048, 00:10:22.677 "data_size": 63488 00:10:22.677 }, 00:10:22.677 { 00:10:22.677 "name": "BaseBdev4", 00:10:22.677 "uuid": "c3d6a7e0-b3d5-55c4-ad11-94a74bb02260", 00:10:22.677 "is_configured": true, 00:10:22.677 "data_offset": 2048, 00:10:22.677 "data_size": 63488 00:10:22.677 } 00:10:22.677 ] 00:10:22.677 }' 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.677 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.936 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.936 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.936 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:22.936 [2024-11-20 03:17:12.520546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.936 [2024-11-20 03:17:12.520584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.936 [2024-11-20 03:17:12.523643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.936 [2024-11-20 03:17:12.523714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.936 [2024-11-20 03:17:12.523764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.936 [2024-11-20 03:17:12.523778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:22.937 { 00:10:22.937 "results": [ 00:10:22.937 { 00:10:22.937 "job": "raid_bdev1", 00:10:22.937 "core_mask": "0x1", 00:10:22.937 "workload": "randrw", 00:10:22.937 "percentage": 50, 00:10:22.937 "status": "finished", 00:10:22.937 "queue_depth": 1, 00:10:22.937 "io_size": 131072, 00:10:22.937 "runtime": 1.40121, 00:10:22.937 "iops": 15115.507311537885, 00:10:22.937 "mibps": 1889.4384139422357, 00:10:22.937 "io_failed": 1, 00:10:22.937 "io_timeout": 0, 00:10:22.937 "avg_latency_us": 91.96243960095242, 00:10:22.937 "min_latency_us": 26.829694323144103, 00:10:22.937 "max_latency_us": 1473.844541484716 00:10:22.937 } 00:10:22.937 ], 00:10:22.937 "core_count": 1 00:10:22.937 } 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70981 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70981 ']' 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70981 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70981 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70981' 00:10:22.937 killing process with pid 70981 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70981 00:10:22.937 [2024-11-20 03:17:12.565994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.937 03:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70981 00:10:23.503 [2024-11-20 03:17:12.901345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.879 03:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.880 03:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JGefjzwhWa 00:10:24.880 03:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.880 03:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:24.880 03:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:24.880 03:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.880 03:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.880 03:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:24.880 ************************************ 00:10:24.880 END TEST raid_write_error_test 00:10:24.880 
************************************ 00:10:24.880 00:10:24.880 real 0m4.774s 00:10:24.880 user 0m5.668s 00:10:24.880 sys 0m0.573s 00:10:24.880 03:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.880 03:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.880 03:17:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:24.880 03:17:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:24.880 03:17:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.880 03:17:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.880 03:17:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.880 ************************************ 00:10:24.880 START TEST raid_state_function_test 00:10:24.880 ************************************ 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.880 03:17:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:24.880 03:17:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71125 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71125' 00:10:24.880 Process raid pid: 71125 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71125 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71125 ']' 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.880 03:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.880 [2024-11-20 03:17:14.263018] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:10:24.880 [2024-11-20 03:17:14.263223] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.880 [2024-11-20 03:17:14.441730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.139 [2024-11-20 03:17:14.553283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.139 [2024-11-20 03:17:14.765674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.139 [2024-11-20 03:17:14.765776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.706 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.706 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.706 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.706 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.706 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.706 [2024-11-20 03:17:15.106661] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.706 [2024-11-20 03:17:15.106788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.706 [2024-11-20 03:17:15.106824] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.706 [2024-11-20 03:17:15.106852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.706 [2024-11-20 03:17:15.106879] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:25.706 [2024-11-20 03:17:15.106913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.706 [2024-11-20 03:17:15.106931] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.706 [2024-11-20 03:17:15.106969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.706 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.707 "name": "Existed_Raid", 00:10:25.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.707 "strip_size_kb": 64, 00:10:25.707 "state": "configuring", 00:10:25.707 "raid_level": "concat", 00:10:25.707 "superblock": false, 00:10:25.707 "num_base_bdevs": 4, 00:10:25.707 "num_base_bdevs_discovered": 0, 00:10:25.707 "num_base_bdevs_operational": 4, 00:10:25.707 "base_bdevs_list": [ 00:10:25.707 { 00:10:25.707 "name": "BaseBdev1", 00:10:25.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.707 "is_configured": false, 00:10:25.707 "data_offset": 0, 00:10:25.707 "data_size": 0 00:10:25.707 }, 00:10:25.707 { 00:10:25.707 "name": "BaseBdev2", 00:10:25.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.707 "is_configured": false, 00:10:25.707 "data_offset": 0, 00:10:25.707 "data_size": 0 00:10:25.707 }, 00:10:25.707 { 00:10:25.707 "name": "BaseBdev3", 00:10:25.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.707 "is_configured": false, 00:10:25.707 "data_offset": 0, 00:10:25.707 "data_size": 0 00:10:25.707 }, 00:10:25.707 { 00:10:25.707 "name": "BaseBdev4", 00:10:25.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.707 "is_configured": false, 00:10:25.707 "data_offset": 0, 00:10:25.707 "data_size": 0 00:10:25.707 } 00:10:25.707 ] 00:10:25.707 }' 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.707 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.967 [2024-11-20 03:17:15.573791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.967 [2024-11-20 03:17:15.573832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.967 [2024-11-20 03:17:15.585769] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.967 [2024-11-20 03:17:15.585810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.967 [2024-11-20 03:17:15.585819] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.967 [2024-11-20 03:17:15.585828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.967 [2024-11-20 03:17:15.585835] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.967 [2024-11-20 03:17:15.585844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.967 [2024-11-20 03:17:15.585850] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.967 [2024-11-20 03:17:15.585859] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.967 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.226 [2024-11-20 03:17:15.634229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.226 BaseBdev1 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.226 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.226 [ 00:10:26.226 { 00:10:26.226 "name": "BaseBdev1", 00:10:26.226 "aliases": [ 00:10:26.226 "a7a23d8a-af78-4078-acb7-f0e863c32b38" 00:10:26.226 ], 00:10:26.226 "product_name": "Malloc disk", 00:10:26.226 "block_size": 512, 00:10:26.226 "num_blocks": 65536, 00:10:26.226 "uuid": "a7a23d8a-af78-4078-acb7-f0e863c32b38", 00:10:26.226 "assigned_rate_limits": { 00:10:26.226 "rw_ios_per_sec": 0, 00:10:26.226 "rw_mbytes_per_sec": 0, 00:10:26.226 "r_mbytes_per_sec": 0, 00:10:26.227 "w_mbytes_per_sec": 0 00:10:26.227 }, 00:10:26.227 "claimed": true, 00:10:26.227 "claim_type": "exclusive_write", 00:10:26.227 "zoned": false, 00:10:26.227 "supported_io_types": { 00:10:26.227 "read": true, 00:10:26.227 "write": true, 00:10:26.227 "unmap": true, 00:10:26.227 "flush": true, 00:10:26.227 "reset": true, 00:10:26.227 "nvme_admin": false, 00:10:26.227 "nvme_io": false, 00:10:26.227 "nvme_io_md": false, 00:10:26.227 "write_zeroes": true, 00:10:26.227 "zcopy": true, 00:10:26.227 "get_zone_info": false, 00:10:26.227 "zone_management": false, 00:10:26.227 "zone_append": false, 00:10:26.227 "compare": false, 00:10:26.227 "compare_and_write": false, 00:10:26.227 "abort": true, 00:10:26.227 "seek_hole": false, 00:10:26.227 "seek_data": false, 00:10:26.227 "copy": true, 00:10:26.227 "nvme_iov_md": false 00:10:26.227 }, 00:10:26.227 "memory_domains": [ 00:10:26.227 { 00:10:26.227 "dma_device_id": "system", 00:10:26.227 "dma_device_type": 1 00:10:26.227 }, 00:10:26.227 { 00:10:26.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.227 "dma_device_type": 2 00:10:26.227 } 00:10:26.227 ], 00:10:26.227 "driver_specific": {} 00:10:26.227 } 00:10:26.227 ] 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.227 "name": "Existed_Raid", 
00:10:26.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.227 "strip_size_kb": 64, 00:10:26.227 "state": "configuring", 00:10:26.227 "raid_level": "concat", 00:10:26.227 "superblock": false, 00:10:26.227 "num_base_bdevs": 4, 00:10:26.227 "num_base_bdevs_discovered": 1, 00:10:26.227 "num_base_bdevs_operational": 4, 00:10:26.227 "base_bdevs_list": [ 00:10:26.227 { 00:10:26.227 "name": "BaseBdev1", 00:10:26.227 "uuid": "a7a23d8a-af78-4078-acb7-f0e863c32b38", 00:10:26.227 "is_configured": true, 00:10:26.227 "data_offset": 0, 00:10:26.227 "data_size": 65536 00:10:26.227 }, 00:10:26.227 { 00:10:26.227 "name": "BaseBdev2", 00:10:26.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.227 "is_configured": false, 00:10:26.227 "data_offset": 0, 00:10:26.227 "data_size": 0 00:10:26.227 }, 00:10:26.227 { 00:10:26.227 "name": "BaseBdev3", 00:10:26.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.227 "is_configured": false, 00:10:26.227 "data_offset": 0, 00:10:26.227 "data_size": 0 00:10:26.227 }, 00:10:26.227 { 00:10:26.227 "name": "BaseBdev4", 00:10:26.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.227 "is_configured": false, 00:10:26.227 "data_offset": 0, 00:10:26.227 "data_size": 0 00:10:26.227 } 00:10:26.227 ] 00:10:26.227 }' 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.227 03:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.486 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.486 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.486 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.745 [2024-11-20 03:17:16.121489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.746 [2024-11-20 03:17:16.121551] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.746 [2024-11-20 03:17:16.133513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.746 [2024-11-20 03:17:16.135507] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.746 [2024-11-20 03:17:16.135548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.746 [2024-11-20 03:17:16.135575] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.746 [2024-11-20 03:17:16.135587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.746 [2024-11-20 03:17:16.135594] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.746 [2024-11-20 03:17:16.135604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.746 "name": "Existed_Raid", 00:10:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.746 "strip_size_kb": 64, 00:10:26.746 "state": "configuring", 00:10:26.746 "raid_level": "concat", 00:10:26.746 "superblock": false, 00:10:26.746 "num_base_bdevs": 4, 00:10:26.746 
"num_base_bdevs_discovered": 1, 00:10:26.746 "num_base_bdevs_operational": 4, 00:10:26.746 "base_bdevs_list": [ 00:10:26.746 { 00:10:26.746 "name": "BaseBdev1", 00:10:26.746 "uuid": "a7a23d8a-af78-4078-acb7-f0e863c32b38", 00:10:26.746 "is_configured": true, 00:10:26.746 "data_offset": 0, 00:10:26.746 "data_size": 65536 00:10:26.746 }, 00:10:26.746 { 00:10:26.746 "name": "BaseBdev2", 00:10:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.746 "is_configured": false, 00:10:26.746 "data_offset": 0, 00:10:26.746 "data_size": 0 00:10:26.746 }, 00:10:26.746 { 00:10:26.746 "name": "BaseBdev3", 00:10:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.746 "is_configured": false, 00:10:26.746 "data_offset": 0, 00:10:26.746 "data_size": 0 00:10:26.746 }, 00:10:26.746 { 00:10:26.746 "name": "BaseBdev4", 00:10:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.746 "is_configured": false, 00:10:26.746 "data_offset": 0, 00:10:26.746 "data_size": 0 00:10:26.746 } 00:10:26.746 ] 00:10:26.746 }' 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.746 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.005 [2024-11-20 03:17:16.631078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.005 BaseBdev2 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.005 03:17:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.005 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.263 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.263 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.263 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.264 [ 00:10:27.264 { 00:10:27.264 "name": "BaseBdev2", 00:10:27.264 "aliases": [ 00:10:27.264 "a3011967-315d-4856-9341-c4df85a2c3c1" 00:10:27.264 ], 00:10:27.264 "product_name": "Malloc disk", 00:10:27.264 "block_size": 512, 00:10:27.264 "num_blocks": 65536, 00:10:27.264 "uuid": "a3011967-315d-4856-9341-c4df85a2c3c1", 00:10:27.264 "assigned_rate_limits": { 00:10:27.264 "rw_ios_per_sec": 0, 00:10:27.264 "rw_mbytes_per_sec": 0, 00:10:27.264 "r_mbytes_per_sec": 0, 00:10:27.264 "w_mbytes_per_sec": 0 00:10:27.264 }, 00:10:27.264 "claimed": true, 00:10:27.264 "claim_type": "exclusive_write", 00:10:27.264 "zoned": false, 00:10:27.264 "supported_io_types": { 
00:10:27.264 "read": true, 00:10:27.264 "write": true, 00:10:27.264 "unmap": true, 00:10:27.264 "flush": true, 00:10:27.264 "reset": true, 00:10:27.264 "nvme_admin": false, 00:10:27.264 "nvme_io": false, 00:10:27.264 "nvme_io_md": false, 00:10:27.264 "write_zeroes": true, 00:10:27.264 "zcopy": true, 00:10:27.264 "get_zone_info": false, 00:10:27.264 "zone_management": false, 00:10:27.264 "zone_append": false, 00:10:27.264 "compare": false, 00:10:27.264 "compare_and_write": false, 00:10:27.264 "abort": true, 00:10:27.264 "seek_hole": false, 00:10:27.264 "seek_data": false, 00:10:27.264 "copy": true, 00:10:27.264 "nvme_iov_md": false 00:10:27.264 }, 00:10:27.264 "memory_domains": [ 00:10:27.264 { 00:10:27.264 "dma_device_id": "system", 00:10:27.264 "dma_device_type": 1 00:10:27.264 }, 00:10:27.264 { 00:10:27.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.264 "dma_device_type": 2 00:10:27.264 } 00:10:27.264 ], 00:10:27.264 "driver_specific": {} 00:10:27.264 } 00:10:27.264 ] 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.264 "name": "Existed_Raid", 00:10:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.264 "strip_size_kb": 64, 00:10:27.264 "state": "configuring", 00:10:27.264 "raid_level": "concat", 00:10:27.264 "superblock": false, 00:10:27.264 "num_base_bdevs": 4, 00:10:27.264 "num_base_bdevs_discovered": 2, 00:10:27.264 "num_base_bdevs_operational": 4, 00:10:27.264 "base_bdevs_list": [ 00:10:27.264 { 00:10:27.264 "name": "BaseBdev1", 00:10:27.264 "uuid": "a7a23d8a-af78-4078-acb7-f0e863c32b38", 00:10:27.264 "is_configured": true, 00:10:27.264 "data_offset": 0, 00:10:27.264 "data_size": 65536 00:10:27.264 }, 00:10:27.264 { 00:10:27.264 "name": "BaseBdev2", 00:10:27.264 "uuid": "a3011967-315d-4856-9341-c4df85a2c3c1", 00:10:27.264 
"is_configured": true, 00:10:27.264 "data_offset": 0, 00:10:27.264 "data_size": 65536 00:10:27.264 }, 00:10:27.264 { 00:10:27.264 "name": "BaseBdev3", 00:10:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.264 "is_configured": false, 00:10:27.264 "data_offset": 0, 00:10:27.264 "data_size": 0 00:10:27.264 }, 00:10:27.264 { 00:10:27.264 "name": "BaseBdev4", 00:10:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.264 "is_configured": false, 00:10:27.264 "data_offset": 0, 00:10:27.264 "data_size": 0 00:10:27.264 } 00:10:27.264 ] 00:10:27.264 }' 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.264 03:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.545 [2024-11-20 03:17:17.156810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.545 BaseBdev3 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.545 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.844 [ 00:10:27.844 { 00:10:27.844 "name": "BaseBdev3", 00:10:27.844 "aliases": [ 00:10:27.844 "22f65ab0-8bde-4d47-a037-8c7b1b03382a" 00:10:27.844 ], 00:10:27.844 "product_name": "Malloc disk", 00:10:27.844 "block_size": 512, 00:10:27.844 "num_blocks": 65536, 00:10:27.844 "uuid": "22f65ab0-8bde-4d47-a037-8c7b1b03382a", 00:10:27.844 "assigned_rate_limits": { 00:10:27.844 "rw_ios_per_sec": 0, 00:10:27.844 "rw_mbytes_per_sec": 0, 00:10:27.844 "r_mbytes_per_sec": 0, 00:10:27.844 "w_mbytes_per_sec": 0 00:10:27.844 }, 00:10:27.844 "claimed": true, 00:10:27.844 "claim_type": "exclusive_write", 00:10:27.844 "zoned": false, 00:10:27.844 "supported_io_types": { 00:10:27.844 "read": true, 00:10:27.844 "write": true, 00:10:27.844 "unmap": true, 00:10:27.844 "flush": true, 00:10:27.844 "reset": true, 00:10:27.844 "nvme_admin": false, 00:10:27.844 "nvme_io": false, 00:10:27.844 "nvme_io_md": false, 00:10:27.844 "write_zeroes": true, 00:10:27.844 "zcopy": true, 00:10:27.844 "get_zone_info": false, 00:10:27.844 "zone_management": false, 00:10:27.844 "zone_append": false, 00:10:27.844 "compare": false, 00:10:27.844 "compare_and_write": false, 
00:10:27.844 "abort": true, 00:10:27.844 "seek_hole": false, 00:10:27.844 "seek_data": false, 00:10:27.844 "copy": true, 00:10:27.844 "nvme_iov_md": false 00:10:27.844 }, 00:10:27.844 "memory_domains": [ 00:10:27.844 { 00:10:27.844 "dma_device_id": "system", 00:10:27.844 "dma_device_type": 1 00:10:27.844 }, 00:10:27.844 { 00:10:27.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.844 "dma_device_type": 2 00:10:27.844 } 00:10:27.844 ], 00:10:27.844 "driver_specific": {} 00:10:27.844 } 00:10:27.844 ] 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.844 "name": "Existed_Raid", 00:10:27.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.844 "strip_size_kb": 64, 00:10:27.844 "state": "configuring", 00:10:27.844 "raid_level": "concat", 00:10:27.844 "superblock": false, 00:10:27.844 "num_base_bdevs": 4, 00:10:27.844 "num_base_bdevs_discovered": 3, 00:10:27.844 "num_base_bdevs_operational": 4, 00:10:27.844 "base_bdevs_list": [ 00:10:27.844 { 00:10:27.844 "name": "BaseBdev1", 00:10:27.844 "uuid": "a7a23d8a-af78-4078-acb7-f0e863c32b38", 00:10:27.844 "is_configured": true, 00:10:27.844 "data_offset": 0, 00:10:27.844 "data_size": 65536 00:10:27.844 }, 00:10:27.844 { 00:10:27.844 "name": "BaseBdev2", 00:10:27.844 "uuid": "a3011967-315d-4856-9341-c4df85a2c3c1", 00:10:27.844 "is_configured": true, 00:10:27.844 "data_offset": 0, 00:10:27.844 "data_size": 65536 00:10:27.844 }, 00:10:27.844 { 00:10:27.844 "name": "BaseBdev3", 00:10:27.844 "uuid": "22f65ab0-8bde-4d47-a037-8c7b1b03382a", 00:10:27.844 "is_configured": true, 00:10:27.844 "data_offset": 0, 00:10:27.844 "data_size": 65536 00:10:27.844 }, 00:10:27.844 { 00:10:27.844 "name": "BaseBdev4", 00:10:27.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.844 "is_configured": false, 
00:10:27.844 "data_offset": 0, 00:10:27.844 "data_size": 0 00:10:27.844 } 00:10:27.844 ] 00:10:27.844 }' 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.844 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.108 [2024-11-20 03:17:17.703409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.108 [2024-11-20 03:17:17.703463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:28.108 [2024-11-20 03:17:17.703488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:28.108 [2024-11-20 03:17:17.703815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:28.108 [2024-11-20 03:17:17.704020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:28.108 [2024-11-20 03:17:17.704045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:28.108 [2024-11-20 03:17:17.704324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.108 BaseBdev4 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.108 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.108 [ 00:10:28.108 { 00:10:28.108 "name": "BaseBdev4", 00:10:28.108 "aliases": [ 00:10:28.108 "6f330a58-0cb2-4c7f-a52b-c33759bdc70b" 00:10:28.108 ], 00:10:28.108 "product_name": "Malloc disk", 00:10:28.108 "block_size": 512, 00:10:28.108 "num_blocks": 65536, 00:10:28.108 "uuid": "6f330a58-0cb2-4c7f-a52b-c33759bdc70b", 00:10:28.108 "assigned_rate_limits": { 00:10:28.108 "rw_ios_per_sec": 0, 00:10:28.108 "rw_mbytes_per_sec": 0, 00:10:28.108 "r_mbytes_per_sec": 0, 00:10:28.108 "w_mbytes_per_sec": 0 00:10:28.108 }, 00:10:28.108 "claimed": true, 00:10:28.108 "claim_type": "exclusive_write", 00:10:28.108 "zoned": false, 00:10:28.108 "supported_io_types": { 00:10:28.108 "read": true, 00:10:28.108 "write": true, 00:10:28.108 "unmap": true, 00:10:28.108 "flush": true, 00:10:28.108 "reset": true, 00:10:28.108 
"nvme_admin": false, 00:10:28.108 "nvme_io": false, 00:10:28.108 "nvme_io_md": false, 00:10:28.108 "write_zeroes": true, 00:10:28.108 "zcopy": true, 00:10:28.108 "get_zone_info": false, 00:10:28.108 "zone_management": false, 00:10:28.108 "zone_append": false, 00:10:28.108 "compare": false, 00:10:28.108 "compare_and_write": false, 00:10:28.108 "abort": true, 00:10:28.108 "seek_hole": false, 00:10:28.108 "seek_data": false, 00:10:28.108 "copy": true, 00:10:28.108 "nvme_iov_md": false 00:10:28.108 }, 00:10:28.108 "memory_domains": [ 00:10:28.108 { 00:10:28.108 "dma_device_id": "system", 00:10:28.108 "dma_device_type": 1 00:10:28.108 }, 00:10:28.108 { 00:10:28.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.108 "dma_device_type": 2 00:10:28.108 } 00:10:28.108 ], 00:10:28.108 "driver_specific": {} 00:10:28.108 } 00:10:28.367 ] 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.367 
03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.367 "name": "Existed_Raid", 00:10:28.367 "uuid": "2524d79a-f738-4fab-8772-8facd5b63c88", 00:10:28.367 "strip_size_kb": 64, 00:10:28.367 "state": "online", 00:10:28.367 "raid_level": "concat", 00:10:28.367 "superblock": false, 00:10:28.367 "num_base_bdevs": 4, 00:10:28.367 "num_base_bdevs_discovered": 4, 00:10:28.367 "num_base_bdevs_operational": 4, 00:10:28.367 "base_bdevs_list": [ 00:10:28.367 { 00:10:28.367 "name": "BaseBdev1", 00:10:28.367 "uuid": "a7a23d8a-af78-4078-acb7-f0e863c32b38", 00:10:28.367 "is_configured": true, 00:10:28.367 "data_offset": 0, 00:10:28.367 "data_size": 65536 00:10:28.367 }, 00:10:28.367 { 00:10:28.367 "name": "BaseBdev2", 00:10:28.367 "uuid": "a3011967-315d-4856-9341-c4df85a2c3c1", 00:10:28.367 "is_configured": true, 00:10:28.367 "data_offset": 0, 00:10:28.367 "data_size": 65536 00:10:28.367 }, 00:10:28.367 { 00:10:28.367 "name": "BaseBdev3", 
00:10:28.367 "uuid": "22f65ab0-8bde-4d47-a037-8c7b1b03382a", 00:10:28.367 "is_configured": true, 00:10:28.367 "data_offset": 0, 00:10:28.367 "data_size": 65536 00:10:28.367 }, 00:10:28.367 { 00:10:28.367 "name": "BaseBdev4", 00:10:28.367 "uuid": "6f330a58-0cb2-4c7f-a52b-c33759bdc70b", 00:10:28.367 "is_configured": true, 00:10:28.367 "data_offset": 0, 00:10:28.367 "data_size": 65536 00:10:28.367 } 00:10:28.367 ] 00:10:28.367 }' 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.367 03:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.627 [2024-11-20 03:17:18.163076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.627 
03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.627 "name": "Existed_Raid", 00:10:28.627 "aliases": [ 00:10:28.627 "2524d79a-f738-4fab-8772-8facd5b63c88" 00:10:28.627 ], 00:10:28.627 "product_name": "Raid Volume", 00:10:28.627 "block_size": 512, 00:10:28.627 "num_blocks": 262144, 00:10:28.627 "uuid": "2524d79a-f738-4fab-8772-8facd5b63c88", 00:10:28.627 "assigned_rate_limits": { 00:10:28.627 "rw_ios_per_sec": 0, 00:10:28.627 "rw_mbytes_per_sec": 0, 00:10:28.627 "r_mbytes_per_sec": 0, 00:10:28.627 "w_mbytes_per_sec": 0 00:10:28.627 }, 00:10:28.627 "claimed": false, 00:10:28.627 "zoned": false, 00:10:28.627 "supported_io_types": { 00:10:28.627 "read": true, 00:10:28.627 "write": true, 00:10:28.627 "unmap": true, 00:10:28.627 "flush": true, 00:10:28.627 "reset": true, 00:10:28.627 "nvme_admin": false, 00:10:28.627 "nvme_io": false, 00:10:28.627 "nvme_io_md": false, 00:10:28.627 "write_zeroes": true, 00:10:28.627 "zcopy": false, 00:10:28.627 "get_zone_info": false, 00:10:28.627 "zone_management": false, 00:10:28.627 "zone_append": false, 00:10:28.627 "compare": false, 00:10:28.627 "compare_and_write": false, 00:10:28.627 "abort": false, 00:10:28.627 "seek_hole": false, 00:10:28.627 "seek_data": false, 00:10:28.627 "copy": false, 00:10:28.627 "nvme_iov_md": false 00:10:28.627 }, 00:10:28.627 "memory_domains": [ 00:10:28.627 { 00:10:28.627 "dma_device_id": "system", 00:10:28.627 "dma_device_type": 1 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.627 "dma_device_type": 2 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "dma_device_id": "system", 00:10:28.627 "dma_device_type": 1 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.627 "dma_device_type": 2 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "dma_device_id": "system", 00:10:28.627 "dma_device_type": 1 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:28.627 "dma_device_type": 2 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "dma_device_id": "system", 00:10:28.627 "dma_device_type": 1 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.627 "dma_device_type": 2 00:10:28.627 } 00:10:28.627 ], 00:10:28.627 "driver_specific": { 00:10:28.627 "raid": { 00:10:28.627 "uuid": "2524d79a-f738-4fab-8772-8facd5b63c88", 00:10:28.627 "strip_size_kb": 64, 00:10:28.627 "state": "online", 00:10:28.627 "raid_level": "concat", 00:10:28.627 "superblock": false, 00:10:28.627 "num_base_bdevs": 4, 00:10:28.627 "num_base_bdevs_discovered": 4, 00:10:28.627 "num_base_bdevs_operational": 4, 00:10:28.627 "base_bdevs_list": [ 00:10:28.627 { 00:10:28.627 "name": "BaseBdev1", 00:10:28.627 "uuid": "a7a23d8a-af78-4078-acb7-f0e863c32b38", 00:10:28.627 "is_configured": true, 00:10:28.627 "data_offset": 0, 00:10:28.627 "data_size": 65536 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "name": "BaseBdev2", 00:10:28.627 "uuid": "a3011967-315d-4856-9341-c4df85a2c3c1", 00:10:28.627 "is_configured": true, 00:10:28.627 "data_offset": 0, 00:10:28.627 "data_size": 65536 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "name": "BaseBdev3", 00:10:28.627 "uuid": "22f65ab0-8bde-4d47-a037-8c7b1b03382a", 00:10:28.627 "is_configured": true, 00:10:28.627 "data_offset": 0, 00:10:28.627 "data_size": 65536 00:10:28.627 }, 00:10:28.627 { 00:10:28.627 "name": "BaseBdev4", 00:10:28.627 "uuid": "6f330a58-0cb2-4c7f-a52b-c33759bdc70b", 00:10:28.627 "is_configured": true, 00:10:28.627 "data_offset": 0, 00:10:28.627 "data_size": 65536 00:10:28.627 } 00:10:28.627 ] 00:10:28.627 } 00:10:28.627 } 00:10:28.627 }' 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:28.627 BaseBdev2 
00:10:28.627 BaseBdev3 00:10:28.627 BaseBdev4' 00:10:28.627 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.886 03:17:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.886 03:17:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.886 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.886 [2024-11-20 03:17:18.486224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.886 [2024-11-20 03:17:18.486259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.886 [2024-11-20 03:17:18.486310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.145 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.146 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.146 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.146 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.146 "name": "Existed_Raid", 00:10:29.146 "uuid": "2524d79a-f738-4fab-8772-8facd5b63c88", 00:10:29.146 "strip_size_kb": 64, 00:10:29.146 "state": "offline", 00:10:29.146 "raid_level": "concat", 00:10:29.146 "superblock": false, 00:10:29.146 "num_base_bdevs": 4, 00:10:29.146 "num_base_bdevs_discovered": 3, 00:10:29.146 "num_base_bdevs_operational": 3, 00:10:29.146 "base_bdevs_list": [ 00:10:29.146 { 00:10:29.146 "name": null, 00:10:29.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.146 "is_configured": false, 00:10:29.146 "data_offset": 0, 00:10:29.146 "data_size": 65536 00:10:29.146 }, 00:10:29.146 { 00:10:29.146 "name": "BaseBdev2", 00:10:29.146 "uuid": "a3011967-315d-4856-9341-c4df85a2c3c1", 00:10:29.146 "is_configured": 
true, 00:10:29.146 "data_offset": 0, 00:10:29.146 "data_size": 65536 00:10:29.146 }, 00:10:29.146 { 00:10:29.146 "name": "BaseBdev3", 00:10:29.146 "uuid": "22f65ab0-8bde-4d47-a037-8c7b1b03382a", 00:10:29.146 "is_configured": true, 00:10:29.146 "data_offset": 0, 00:10:29.146 "data_size": 65536 00:10:29.146 }, 00:10:29.146 { 00:10:29.146 "name": "BaseBdev4", 00:10:29.146 "uuid": "6f330a58-0cb2-4c7f-a52b-c33759bdc70b", 00:10:29.146 "is_configured": true, 00:10:29.146 "data_offset": 0, 00:10:29.146 "data_size": 65536 00:10:29.146 } 00:10:29.146 ] 00:10:29.146 }' 00:10:29.146 03:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.146 03:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.404 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:29.404 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.404 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.404 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.404 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.404 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.404 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.671 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.671 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.672 [2024-11-20 03:17:19.046323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.672 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.672 [2024-11-20 03:17:19.206674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:29.932 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.933 03:17:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.933 [2024-11-20 03:17:19.361035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:29.933 [2024-11-20 03:17:19.361091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.933 BaseBdev2 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.933 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.192 [ 00:10:30.192 { 00:10:30.192 "name": "BaseBdev2", 00:10:30.192 "aliases": [ 00:10:30.192 "46079c0b-9c4c-4fdf-9f39-001b3032e3b4" 00:10:30.192 ], 00:10:30.192 "product_name": "Malloc disk", 00:10:30.192 "block_size": 512, 00:10:30.192 "num_blocks": 65536, 00:10:30.192 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:30.192 "assigned_rate_limits": { 00:10:30.192 "rw_ios_per_sec": 0, 00:10:30.192 "rw_mbytes_per_sec": 0, 00:10:30.192 "r_mbytes_per_sec": 0, 00:10:30.192 "w_mbytes_per_sec": 0 00:10:30.192 }, 00:10:30.192 "claimed": false, 00:10:30.192 "zoned": false, 00:10:30.192 "supported_io_types": { 00:10:30.192 "read": true, 00:10:30.192 "write": true, 00:10:30.192 "unmap": true, 00:10:30.192 "flush": true, 00:10:30.192 "reset": true, 00:10:30.192 "nvme_admin": false, 00:10:30.192 "nvme_io": false, 00:10:30.192 "nvme_io_md": false, 00:10:30.192 "write_zeroes": true, 00:10:30.192 "zcopy": true, 00:10:30.192 "get_zone_info": false, 00:10:30.192 "zone_management": false, 00:10:30.192 "zone_append": false, 00:10:30.192 "compare": false, 00:10:30.192 "compare_and_write": false, 00:10:30.192 "abort": true, 00:10:30.192 "seek_hole": false, 00:10:30.192 
"seek_data": false, 00:10:30.192 "copy": true, 00:10:30.192 "nvme_iov_md": false 00:10:30.192 }, 00:10:30.192 "memory_domains": [ 00:10:30.192 { 00:10:30.192 "dma_device_id": "system", 00:10:30.192 "dma_device_type": 1 00:10:30.192 }, 00:10:30.192 { 00:10:30.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.192 "dma_device_type": 2 00:10:30.192 } 00:10:30.192 ], 00:10:30.192 "driver_specific": {} 00:10:30.192 } 00:10:30.192 ] 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.192 BaseBdev3 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.192 [ 00:10:30.192 { 00:10:30.192 "name": "BaseBdev3", 00:10:30.192 "aliases": [ 00:10:30.192 "91561f8a-6657-4a7b-8192-74792f6eac5f" 00:10:30.192 ], 00:10:30.192 "product_name": "Malloc disk", 00:10:30.192 "block_size": 512, 00:10:30.192 "num_blocks": 65536, 00:10:30.192 "uuid": "91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:30.192 "assigned_rate_limits": { 00:10:30.192 "rw_ios_per_sec": 0, 00:10:30.192 "rw_mbytes_per_sec": 0, 00:10:30.192 "r_mbytes_per_sec": 0, 00:10:30.192 "w_mbytes_per_sec": 0 00:10:30.192 }, 00:10:30.192 "claimed": false, 00:10:30.192 "zoned": false, 00:10:30.192 "supported_io_types": { 00:10:30.192 "read": true, 00:10:30.192 "write": true, 00:10:30.192 "unmap": true, 00:10:30.192 "flush": true, 00:10:30.192 "reset": true, 00:10:30.192 "nvme_admin": false, 00:10:30.192 "nvme_io": false, 00:10:30.192 "nvme_io_md": false, 00:10:30.192 "write_zeroes": true, 00:10:30.192 "zcopy": true, 00:10:30.192 "get_zone_info": false, 00:10:30.192 "zone_management": false, 00:10:30.192 "zone_append": false, 00:10:30.192 "compare": false, 00:10:30.192 "compare_and_write": false, 00:10:30.192 "abort": true, 00:10:30.192 "seek_hole": false, 00:10:30.192 "seek_data": false, 
00:10:30.192 "copy": true, 00:10:30.192 "nvme_iov_md": false 00:10:30.192 }, 00:10:30.192 "memory_domains": [ 00:10:30.192 { 00:10:30.192 "dma_device_id": "system", 00:10:30.192 "dma_device_type": 1 00:10:30.192 }, 00:10:30.192 { 00:10:30.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.192 "dma_device_type": 2 00:10:30.192 } 00:10:30.192 ], 00:10:30.192 "driver_specific": {} 00:10:30.192 } 00:10:30.192 ] 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.192 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.192 BaseBdev4 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.193 
03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.193 [ 00:10:30.193 { 00:10:30.193 "name": "BaseBdev4", 00:10:30.193 "aliases": [ 00:10:30.193 "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e" 00:10:30.193 ], 00:10:30.193 "product_name": "Malloc disk", 00:10:30.193 "block_size": 512, 00:10:30.193 "num_blocks": 65536, 00:10:30.193 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:30.193 "assigned_rate_limits": { 00:10:30.193 "rw_ios_per_sec": 0, 00:10:30.193 "rw_mbytes_per_sec": 0, 00:10:30.193 "r_mbytes_per_sec": 0, 00:10:30.193 "w_mbytes_per_sec": 0 00:10:30.193 }, 00:10:30.193 "claimed": false, 00:10:30.193 "zoned": false, 00:10:30.193 "supported_io_types": { 00:10:30.193 "read": true, 00:10:30.193 "write": true, 00:10:30.193 "unmap": true, 00:10:30.193 "flush": true, 00:10:30.193 "reset": true, 00:10:30.193 "nvme_admin": false, 00:10:30.193 "nvme_io": false, 00:10:30.193 "nvme_io_md": false, 00:10:30.193 "write_zeroes": true, 00:10:30.193 "zcopy": true, 00:10:30.193 "get_zone_info": false, 00:10:30.193 "zone_management": false, 00:10:30.193 "zone_append": false, 00:10:30.193 "compare": false, 00:10:30.193 "compare_and_write": false, 00:10:30.193 "abort": true, 00:10:30.193 "seek_hole": false, 00:10:30.193 "seek_data": false, 00:10:30.193 
"copy": true, 00:10:30.193 "nvme_iov_md": false 00:10:30.193 }, 00:10:30.193 "memory_domains": [ 00:10:30.193 { 00:10:30.193 "dma_device_id": "system", 00:10:30.193 "dma_device_type": 1 00:10:30.193 }, 00:10:30.193 { 00:10:30.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.193 "dma_device_type": 2 00:10:30.193 } 00:10:30.193 ], 00:10:30.193 "driver_specific": {} 00:10:30.193 } 00:10:30.193 ] 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.193 [2024-11-20 03:17:19.756196] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.193 [2024-11-20 03:17:19.756240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.193 [2024-11-20 03:17:19.756263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.193 [2024-11-20 03:17:19.758193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.193 [2024-11-20 03:17:19.758262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.193 03:17:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.193 "name": "Existed_Raid", 00:10:30.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.193 "strip_size_kb": 64, 00:10:30.193 "state": "configuring", 00:10:30.193 
"raid_level": "concat", 00:10:30.193 "superblock": false, 00:10:30.193 "num_base_bdevs": 4, 00:10:30.193 "num_base_bdevs_discovered": 3, 00:10:30.193 "num_base_bdevs_operational": 4, 00:10:30.193 "base_bdevs_list": [ 00:10:30.193 { 00:10:30.193 "name": "BaseBdev1", 00:10:30.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.193 "is_configured": false, 00:10:30.193 "data_offset": 0, 00:10:30.193 "data_size": 0 00:10:30.193 }, 00:10:30.193 { 00:10:30.193 "name": "BaseBdev2", 00:10:30.193 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:30.193 "is_configured": true, 00:10:30.193 "data_offset": 0, 00:10:30.193 "data_size": 65536 00:10:30.193 }, 00:10:30.193 { 00:10:30.193 "name": "BaseBdev3", 00:10:30.193 "uuid": "91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:30.193 "is_configured": true, 00:10:30.193 "data_offset": 0, 00:10:30.193 "data_size": 65536 00:10:30.193 }, 00:10:30.193 { 00:10:30.193 "name": "BaseBdev4", 00:10:30.193 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:30.193 "is_configured": true, 00:10:30.193 "data_offset": 0, 00:10:30.193 "data_size": 65536 00:10:30.193 } 00:10:30.193 ] 00:10:30.193 }' 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.193 03:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.760 [2024-11-20 03:17:20.207491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.760 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.760 "name": "Existed_Raid", 00:10:30.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.760 "strip_size_kb": 64, 00:10:30.760 "state": "configuring", 00:10:30.760 "raid_level": "concat", 00:10:30.760 "superblock": false, 
00:10:30.761 "num_base_bdevs": 4, 00:10:30.761 "num_base_bdevs_discovered": 2, 00:10:30.761 "num_base_bdevs_operational": 4, 00:10:30.761 "base_bdevs_list": [ 00:10:30.761 { 00:10:30.761 "name": "BaseBdev1", 00:10:30.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.761 "is_configured": false, 00:10:30.761 "data_offset": 0, 00:10:30.761 "data_size": 0 00:10:30.761 }, 00:10:30.761 { 00:10:30.761 "name": null, 00:10:30.761 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:30.761 "is_configured": false, 00:10:30.761 "data_offset": 0, 00:10:30.761 "data_size": 65536 00:10:30.761 }, 00:10:30.761 { 00:10:30.761 "name": "BaseBdev3", 00:10:30.761 "uuid": "91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:30.761 "is_configured": true, 00:10:30.761 "data_offset": 0, 00:10:30.761 "data_size": 65536 00:10:30.761 }, 00:10:30.761 { 00:10:30.761 "name": "BaseBdev4", 00:10:30.761 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:30.761 "is_configured": true, 00:10:30.761 "data_offset": 0, 00:10:30.761 "data_size": 65536 00:10:30.761 } 00:10:30.761 ] 00:10:30.761 }' 00:10:30.761 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.761 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.328 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.328 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:31.329 03:17:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.329 [2024-11-20 03:17:20.748060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.329 BaseBdev1 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.329 [ 00:10:31.329 { 00:10:31.329 "name": "BaseBdev1", 00:10:31.329 "aliases": [ 00:10:31.329 "891dea76-24f2-4b94-bd69-61d3e8af2bf4" 00:10:31.329 ], 00:10:31.329 "product_name": "Malloc disk", 00:10:31.329 "block_size": 512, 00:10:31.329 "num_blocks": 65536, 00:10:31.329 "uuid": "891dea76-24f2-4b94-bd69-61d3e8af2bf4", 00:10:31.329 "assigned_rate_limits": { 00:10:31.329 "rw_ios_per_sec": 0, 00:10:31.329 "rw_mbytes_per_sec": 0, 00:10:31.329 "r_mbytes_per_sec": 0, 00:10:31.329 "w_mbytes_per_sec": 0 00:10:31.329 }, 00:10:31.329 "claimed": true, 00:10:31.329 "claim_type": "exclusive_write", 00:10:31.329 "zoned": false, 00:10:31.329 "supported_io_types": { 00:10:31.329 "read": true, 00:10:31.329 "write": true, 00:10:31.329 "unmap": true, 00:10:31.329 "flush": true, 00:10:31.329 "reset": true, 00:10:31.329 "nvme_admin": false, 00:10:31.329 "nvme_io": false, 00:10:31.329 "nvme_io_md": false, 00:10:31.329 "write_zeroes": true, 00:10:31.329 "zcopy": true, 00:10:31.329 "get_zone_info": false, 00:10:31.329 "zone_management": false, 00:10:31.329 "zone_append": false, 00:10:31.329 "compare": false, 00:10:31.329 "compare_and_write": false, 00:10:31.329 "abort": true, 00:10:31.329 "seek_hole": false, 00:10:31.329 "seek_data": false, 00:10:31.329 "copy": true, 00:10:31.329 "nvme_iov_md": false 00:10:31.329 }, 00:10:31.329 "memory_domains": [ 00:10:31.329 { 00:10:31.329 "dma_device_id": "system", 00:10:31.329 "dma_device_type": 1 00:10:31.329 }, 00:10:31.329 { 00:10:31.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.329 "dma_device_type": 2 00:10:31.329 } 00:10:31.329 ], 00:10:31.329 "driver_specific": {} 00:10:31.329 } 00:10:31.329 ] 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.329 "name": "Existed_Raid", 00:10:31.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.329 "strip_size_kb": 64, 00:10:31.329 "state": "configuring", 00:10:31.329 "raid_level": "concat", 00:10:31.329 "superblock": false, 
00:10:31.329 "num_base_bdevs": 4, 00:10:31.329 "num_base_bdevs_discovered": 3, 00:10:31.329 "num_base_bdevs_operational": 4, 00:10:31.329 "base_bdevs_list": [ 00:10:31.329 { 00:10:31.329 "name": "BaseBdev1", 00:10:31.329 "uuid": "891dea76-24f2-4b94-bd69-61d3e8af2bf4", 00:10:31.329 "is_configured": true, 00:10:31.329 "data_offset": 0, 00:10:31.329 "data_size": 65536 00:10:31.329 }, 00:10:31.329 { 00:10:31.329 "name": null, 00:10:31.329 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:31.329 "is_configured": false, 00:10:31.329 "data_offset": 0, 00:10:31.329 "data_size": 65536 00:10:31.329 }, 00:10:31.329 { 00:10:31.329 "name": "BaseBdev3", 00:10:31.329 "uuid": "91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:31.329 "is_configured": true, 00:10:31.329 "data_offset": 0, 00:10:31.329 "data_size": 65536 00:10:31.329 }, 00:10:31.329 { 00:10:31.329 "name": "BaseBdev4", 00:10:31.329 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:31.329 "is_configured": true, 00:10:31.329 "data_offset": 0, 00:10:31.329 "data_size": 65536 00:10:31.329 } 00:10:31.329 ] 00:10:31.329 }' 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.329 03:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:31.897 03:17:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.897 [2024-11-20 03:17:21.303206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.897 03:17:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.897 "name": "Existed_Raid", 00:10:31.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.897 "strip_size_kb": 64, 00:10:31.897 "state": "configuring", 00:10:31.897 "raid_level": "concat", 00:10:31.897 "superblock": false, 00:10:31.897 "num_base_bdevs": 4, 00:10:31.897 "num_base_bdevs_discovered": 2, 00:10:31.897 "num_base_bdevs_operational": 4, 00:10:31.897 "base_bdevs_list": [ 00:10:31.897 { 00:10:31.897 "name": "BaseBdev1", 00:10:31.897 "uuid": "891dea76-24f2-4b94-bd69-61d3e8af2bf4", 00:10:31.897 "is_configured": true, 00:10:31.897 "data_offset": 0, 00:10:31.897 "data_size": 65536 00:10:31.897 }, 00:10:31.897 { 00:10:31.897 "name": null, 00:10:31.897 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:31.897 "is_configured": false, 00:10:31.897 "data_offset": 0, 00:10:31.897 "data_size": 65536 00:10:31.897 }, 00:10:31.897 { 00:10:31.897 "name": null, 00:10:31.897 "uuid": "91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:31.897 "is_configured": false, 00:10:31.897 "data_offset": 0, 00:10:31.897 "data_size": 65536 00:10:31.897 }, 00:10:31.897 { 00:10:31.897 "name": "BaseBdev4", 00:10:31.897 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:31.897 "is_configured": true, 00:10:31.897 "data_offset": 0, 00:10:31.897 "data_size": 65536 00:10:31.897 } 00:10:31.897 ] 00:10:31.897 }' 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.897 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.157 03:17:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.157 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.157 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.157 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.157 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.157 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:32.157 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:32.157 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.157 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.157 [2024-11-20 03:17:21.786407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.416 "name": "Existed_Raid", 00:10:32.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.416 "strip_size_kb": 64, 00:10:32.416 "state": "configuring", 00:10:32.416 "raid_level": "concat", 00:10:32.416 "superblock": false, 00:10:32.416 "num_base_bdevs": 4, 00:10:32.416 "num_base_bdevs_discovered": 3, 00:10:32.416 "num_base_bdevs_operational": 4, 00:10:32.416 "base_bdevs_list": [ 00:10:32.416 { 00:10:32.416 "name": "BaseBdev1", 00:10:32.416 "uuid": "891dea76-24f2-4b94-bd69-61d3e8af2bf4", 00:10:32.416 "is_configured": true, 00:10:32.416 "data_offset": 0, 00:10:32.416 "data_size": 65536 00:10:32.416 }, 00:10:32.416 { 00:10:32.416 "name": null, 00:10:32.416 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:32.416 "is_configured": false, 00:10:32.416 "data_offset": 0, 00:10:32.416 "data_size": 65536 00:10:32.416 }, 00:10:32.416 { 00:10:32.416 "name": "BaseBdev3", 00:10:32.416 "uuid": 
"91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:32.416 "is_configured": true, 00:10:32.416 "data_offset": 0, 00:10:32.416 "data_size": 65536 00:10:32.416 }, 00:10:32.416 { 00:10:32.416 "name": "BaseBdev4", 00:10:32.416 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:32.416 "is_configured": true, 00:10:32.416 "data_offset": 0, 00:10:32.416 "data_size": 65536 00:10:32.416 } 00:10:32.416 ] 00:10:32.416 }' 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.416 03:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.674 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.674 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.674 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.674 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.674 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.674 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:32.674 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.674 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.674 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.674 [2024-11-20 03:17:22.277590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.933 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.933 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:32.933 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.933 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.934 "name": "Existed_Raid", 00:10:32.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.934 "strip_size_kb": 64, 00:10:32.934 "state": "configuring", 00:10:32.934 "raid_level": "concat", 00:10:32.934 "superblock": false, 00:10:32.934 "num_base_bdevs": 4, 00:10:32.934 
"num_base_bdevs_discovered": 2, 00:10:32.934 "num_base_bdevs_operational": 4, 00:10:32.934 "base_bdevs_list": [ 00:10:32.934 { 00:10:32.934 "name": null, 00:10:32.934 "uuid": "891dea76-24f2-4b94-bd69-61d3e8af2bf4", 00:10:32.934 "is_configured": false, 00:10:32.934 "data_offset": 0, 00:10:32.934 "data_size": 65536 00:10:32.934 }, 00:10:32.934 { 00:10:32.934 "name": null, 00:10:32.934 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:32.934 "is_configured": false, 00:10:32.934 "data_offset": 0, 00:10:32.934 "data_size": 65536 00:10:32.934 }, 00:10:32.934 { 00:10:32.934 "name": "BaseBdev3", 00:10:32.934 "uuid": "91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:32.934 "is_configured": true, 00:10:32.934 "data_offset": 0, 00:10:32.934 "data_size": 65536 00:10:32.934 }, 00:10:32.934 { 00:10:32.934 "name": "BaseBdev4", 00:10:32.934 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:32.934 "is_configured": true, 00:10:32.934 "data_offset": 0, 00:10:32.934 "data_size": 65536 00:10:32.934 } 00:10:32.934 ] 00:10:32.934 }' 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.934 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.502 [2024-11-20 03:17:22.876247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.502 "name": "Existed_Raid", 00:10:33.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.502 "strip_size_kb": 64, 00:10:33.502 "state": "configuring", 00:10:33.502 "raid_level": "concat", 00:10:33.502 "superblock": false, 00:10:33.502 "num_base_bdevs": 4, 00:10:33.502 "num_base_bdevs_discovered": 3, 00:10:33.502 "num_base_bdevs_operational": 4, 00:10:33.502 "base_bdevs_list": [ 00:10:33.502 { 00:10:33.502 "name": null, 00:10:33.502 "uuid": "891dea76-24f2-4b94-bd69-61d3e8af2bf4", 00:10:33.502 "is_configured": false, 00:10:33.502 "data_offset": 0, 00:10:33.502 "data_size": 65536 00:10:33.502 }, 00:10:33.502 { 00:10:33.502 "name": "BaseBdev2", 00:10:33.502 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:33.502 "is_configured": true, 00:10:33.502 "data_offset": 0, 00:10:33.502 "data_size": 65536 00:10:33.502 }, 00:10:33.502 { 00:10:33.502 "name": "BaseBdev3", 00:10:33.502 "uuid": "91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:33.502 "is_configured": true, 00:10:33.502 "data_offset": 0, 00:10:33.502 "data_size": 65536 00:10:33.502 }, 00:10:33.502 { 00:10:33.502 "name": "BaseBdev4", 00:10:33.502 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:33.502 "is_configured": true, 00:10:33.502 "data_offset": 0, 00:10:33.502 "data_size": 65536 00:10:33.502 } 00:10:33.502 ] 00:10:33.502 }' 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.502 03:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.761 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 891dea76-24f2-4b94-bd69-61d3e8af2bf4 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.020 [2024-11-20 03:17:23.460845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:34.020 [2024-11-20 03:17:23.460896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:34.020 [2024-11-20 03:17:23.460920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:34.020 [2024-11-20 03:17:23.461212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:34.020 [2024-11-20 03:17:23.461380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:34.020 [2024-11-20 03:17:23.461401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:34.020 [2024-11-20 03:17:23.461672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.020 NewBaseBdev 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.020 [ 00:10:34.020 { 00:10:34.020 "name": "NewBaseBdev", 00:10:34.020 "aliases": [ 00:10:34.020 "891dea76-24f2-4b94-bd69-61d3e8af2bf4" 00:10:34.020 ], 00:10:34.020 "product_name": "Malloc disk", 00:10:34.020 "block_size": 512, 00:10:34.020 "num_blocks": 65536, 00:10:34.020 "uuid": "891dea76-24f2-4b94-bd69-61d3e8af2bf4", 00:10:34.020 "assigned_rate_limits": { 00:10:34.020 "rw_ios_per_sec": 0, 00:10:34.020 "rw_mbytes_per_sec": 0, 00:10:34.020 "r_mbytes_per_sec": 0, 00:10:34.020 "w_mbytes_per_sec": 0 00:10:34.020 }, 00:10:34.020 "claimed": true, 00:10:34.020 "claim_type": "exclusive_write", 00:10:34.020 "zoned": false, 00:10:34.020 "supported_io_types": { 00:10:34.020 "read": true, 00:10:34.020 "write": true, 00:10:34.020 "unmap": true, 00:10:34.020 "flush": true, 00:10:34.020 "reset": true, 00:10:34.020 "nvme_admin": false, 00:10:34.020 "nvme_io": false, 00:10:34.020 "nvme_io_md": false, 00:10:34.020 "write_zeroes": true, 00:10:34.020 "zcopy": true, 00:10:34.020 "get_zone_info": false, 00:10:34.020 "zone_management": false, 00:10:34.020 "zone_append": false, 00:10:34.020 "compare": false, 00:10:34.020 "compare_and_write": false, 00:10:34.020 "abort": true, 00:10:34.020 "seek_hole": false, 00:10:34.020 "seek_data": false, 00:10:34.020 "copy": true, 00:10:34.020 "nvme_iov_md": false 00:10:34.020 }, 00:10:34.020 "memory_domains": [ 00:10:34.020 { 00:10:34.020 "dma_device_id": "system", 00:10:34.020 "dma_device_type": 1 00:10:34.020 }, 00:10:34.020 { 00:10:34.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.020 "dma_device_type": 2 00:10:34.020 } 00:10:34.020 ], 00:10:34.020 "driver_specific": {} 00:10:34.020 } 00:10:34.020 ] 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.020 "name": "Existed_Raid", 00:10:34.020 "uuid": "693c6802-b675-472e-b064-555a1142c354", 00:10:34.020 "strip_size_kb": 64, 00:10:34.020 "state": "online", 00:10:34.020 "raid_level": "concat", 00:10:34.020 "superblock": false, 00:10:34.020 
"num_base_bdevs": 4, 00:10:34.020 "num_base_bdevs_discovered": 4, 00:10:34.020 "num_base_bdevs_operational": 4, 00:10:34.020 "base_bdevs_list": [ 00:10:34.020 { 00:10:34.020 "name": "NewBaseBdev", 00:10:34.020 "uuid": "891dea76-24f2-4b94-bd69-61d3e8af2bf4", 00:10:34.020 "is_configured": true, 00:10:34.020 "data_offset": 0, 00:10:34.020 "data_size": 65536 00:10:34.020 }, 00:10:34.020 { 00:10:34.020 "name": "BaseBdev2", 00:10:34.020 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:34.020 "is_configured": true, 00:10:34.020 "data_offset": 0, 00:10:34.020 "data_size": 65536 00:10:34.020 }, 00:10:34.020 { 00:10:34.020 "name": "BaseBdev3", 00:10:34.020 "uuid": "91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:34.020 "is_configured": true, 00:10:34.020 "data_offset": 0, 00:10:34.020 "data_size": 65536 00:10:34.020 }, 00:10:34.020 { 00:10:34.020 "name": "BaseBdev4", 00:10:34.020 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:34.020 "is_configured": true, 00:10:34.020 "data_offset": 0, 00:10:34.020 "data_size": 65536 00:10:34.020 } 00:10:34.020 ] 00:10:34.020 }' 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.020 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.279 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.279 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.279 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.279 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.279 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.279 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.537 03:17:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.537 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.537 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.537 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.537 [2024-11-20 03:17:23.920492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.537 03:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.537 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.537 "name": "Existed_Raid", 00:10:34.537 "aliases": [ 00:10:34.537 "693c6802-b675-472e-b064-555a1142c354" 00:10:34.537 ], 00:10:34.537 "product_name": "Raid Volume", 00:10:34.538 "block_size": 512, 00:10:34.538 "num_blocks": 262144, 00:10:34.538 "uuid": "693c6802-b675-472e-b064-555a1142c354", 00:10:34.538 "assigned_rate_limits": { 00:10:34.538 "rw_ios_per_sec": 0, 00:10:34.538 "rw_mbytes_per_sec": 0, 00:10:34.538 "r_mbytes_per_sec": 0, 00:10:34.538 "w_mbytes_per_sec": 0 00:10:34.538 }, 00:10:34.538 "claimed": false, 00:10:34.538 "zoned": false, 00:10:34.538 "supported_io_types": { 00:10:34.538 "read": true, 00:10:34.538 "write": true, 00:10:34.538 "unmap": true, 00:10:34.538 "flush": true, 00:10:34.538 "reset": true, 00:10:34.538 "nvme_admin": false, 00:10:34.538 "nvme_io": false, 00:10:34.538 "nvme_io_md": false, 00:10:34.538 "write_zeroes": true, 00:10:34.538 "zcopy": false, 00:10:34.538 "get_zone_info": false, 00:10:34.538 "zone_management": false, 00:10:34.538 "zone_append": false, 00:10:34.538 "compare": false, 00:10:34.538 "compare_and_write": false, 00:10:34.538 "abort": false, 00:10:34.538 "seek_hole": false, 00:10:34.538 "seek_data": false, 00:10:34.538 "copy": false, 00:10:34.538 "nvme_iov_md": false 00:10:34.538 }, 
00:10:34.538 "memory_domains": [ 00:10:34.538 { 00:10:34.538 "dma_device_id": "system", 00:10:34.538 "dma_device_type": 1 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.538 "dma_device_type": 2 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "dma_device_id": "system", 00:10:34.538 "dma_device_type": 1 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.538 "dma_device_type": 2 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "dma_device_id": "system", 00:10:34.538 "dma_device_type": 1 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.538 "dma_device_type": 2 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "dma_device_id": "system", 00:10:34.538 "dma_device_type": 1 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.538 "dma_device_type": 2 00:10:34.538 } 00:10:34.538 ], 00:10:34.538 "driver_specific": { 00:10:34.538 "raid": { 00:10:34.538 "uuid": "693c6802-b675-472e-b064-555a1142c354", 00:10:34.538 "strip_size_kb": 64, 00:10:34.538 "state": "online", 00:10:34.538 "raid_level": "concat", 00:10:34.538 "superblock": false, 00:10:34.538 "num_base_bdevs": 4, 00:10:34.538 "num_base_bdevs_discovered": 4, 00:10:34.538 "num_base_bdevs_operational": 4, 00:10:34.538 "base_bdevs_list": [ 00:10:34.538 { 00:10:34.538 "name": "NewBaseBdev", 00:10:34.538 "uuid": "891dea76-24f2-4b94-bd69-61d3e8af2bf4", 00:10:34.538 "is_configured": true, 00:10:34.538 "data_offset": 0, 00:10:34.538 "data_size": 65536 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "name": "BaseBdev2", 00:10:34.538 "uuid": "46079c0b-9c4c-4fdf-9f39-001b3032e3b4", 00:10:34.538 "is_configured": true, 00:10:34.538 "data_offset": 0, 00:10:34.538 "data_size": 65536 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "name": "BaseBdev3", 00:10:34.538 "uuid": "91561f8a-6657-4a7b-8192-74792f6eac5f", 00:10:34.538 "is_configured": true, 00:10:34.538 "data_offset": 0, 
00:10:34.538 "data_size": 65536 00:10:34.538 }, 00:10:34.538 { 00:10:34.538 "name": "BaseBdev4", 00:10:34.538 "uuid": "0318c1a4-3b8b-4b7b-956a-7fa2cef5974e", 00:10:34.538 "is_configured": true, 00:10:34.538 "data_offset": 0, 00:10:34.538 "data_size": 65536 00:10:34.538 } 00:10:34.538 ] 00:10:34.538 } 00:10:34.538 } 00:10:34.538 }' 00:10:34.538 03:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:34.538 BaseBdev2 00:10:34.538 BaseBdev3 00:10:34.538 BaseBdev4' 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.538 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.797 [2024-11-20 03:17:24.247593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.797 [2024-11-20 03:17:24.247642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.797 [2024-11-20 03:17:24.247732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.797 [2024-11-20 03:17:24.247808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.797 [2024-11-20 03:17:24.247828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71125 00:10:34.797 03:17:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71125 ']' 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71125 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71125 00:10:34.797 killing process with pid 71125 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71125' 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71125 00:10:34.797 [2024-11-20 03:17:24.289987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.797 03:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71125 00:10:35.363 [2024-11-20 03:17:24.702142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:36.301 00:10:36.301 real 0m11.670s 00:10:36.301 user 0m18.537s 00:10:36.301 sys 0m2.138s 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.301 ************************************ 00:10:36.301 END TEST raid_state_function_test 00:10:36.301 ************************************ 00:10:36.301 03:17:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:36.301 03:17:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:36.301 03:17:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.301 03:17:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.301 ************************************ 00:10:36.301 START TEST raid_state_function_test_sb 00:10:36.301 ************************************ 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:36.301 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71796 00:10:36.302 03:17:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:36.302 Process raid pid: 71796 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71796' 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71796 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71796 ']' 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.302 03:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.561 [2024-11-20 03:17:26.001215] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:10:36.561 [2024-11-20 03:17:26.001342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.561 [2024-11-20 03:17:26.178835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.820 [2024-11-20 03:17:26.293774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.078 [2024-11-20 03:17:26.501044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.078 [2024-11-20 03:17:26.501087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.338 [2024-11-20 03:17:26.858701] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.338 [2024-11-20 03:17:26.858756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.338 [2024-11-20 03:17:26.858766] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.338 [2024-11-20 03:17:26.858776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.338 [2024-11-20 03:17:26.858783] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:37.338 [2024-11-20 03:17:26.858792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.338 [2024-11-20 03:17:26.858798] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:37.338 [2024-11-20 03:17:26.858807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.338 
03:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.338 "name": "Existed_Raid", 00:10:37.338 "uuid": "3a954861-863d-4686-b8c7-e80c469db8b2", 00:10:37.338 "strip_size_kb": 64, 00:10:37.338 "state": "configuring", 00:10:37.338 "raid_level": "concat", 00:10:37.338 "superblock": true, 00:10:37.338 "num_base_bdevs": 4, 00:10:37.338 "num_base_bdevs_discovered": 0, 00:10:37.338 "num_base_bdevs_operational": 4, 00:10:37.338 "base_bdevs_list": [ 00:10:37.338 { 00:10:37.338 "name": "BaseBdev1", 00:10:37.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.338 "is_configured": false, 00:10:37.338 "data_offset": 0, 00:10:37.338 "data_size": 0 00:10:37.338 }, 00:10:37.338 { 00:10:37.338 "name": "BaseBdev2", 00:10:37.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.338 "is_configured": false, 00:10:37.338 "data_offset": 0, 00:10:37.338 "data_size": 0 00:10:37.338 }, 00:10:37.338 { 00:10:37.338 "name": "BaseBdev3", 00:10:37.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.338 "is_configured": false, 00:10:37.338 "data_offset": 0, 00:10:37.338 "data_size": 0 00:10:37.338 }, 00:10:37.338 { 00:10:37.338 "name": "BaseBdev4", 00:10:37.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.338 "is_configured": false, 00:10:37.338 "data_offset": 0, 00:10:37.338 "data_size": 0 00:10:37.338 } 00:10:37.338 ] 00:10:37.338 }' 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.338 03:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 03:17:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 [2024-11-20 03:17:27.309847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.906 [2024-11-20 03:17:27.309893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 [2024-11-20 03:17:27.321838] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.906 [2024-11-20 03:17:27.321885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.906 [2024-11-20 03:17:27.321895] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.906 [2024-11-20 03:17:27.321904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.906 [2024-11-20 03:17:27.321910] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.906 [2024-11-20 03:17:27.321919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.906 [2024-11-20 03:17:27.321926] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:37.906 [2024-11-20 03:17:27.321934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 [2024-11-20 03:17:27.369267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.906 BaseBdev1 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.906 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 [ 00:10:37.907 { 00:10:37.907 "name": "BaseBdev1", 00:10:37.907 "aliases": [ 00:10:37.907 "1133bcc6-3f19-49b4-bf93-d28da0e02b96" 00:10:37.907 ], 00:10:37.907 "product_name": "Malloc disk", 00:10:37.907 "block_size": 512, 00:10:37.907 "num_blocks": 65536, 00:10:37.907 "uuid": "1133bcc6-3f19-49b4-bf93-d28da0e02b96", 00:10:37.907 "assigned_rate_limits": { 00:10:37.907 "rw_ios_per_sec": 0, 00:10:37.907 "rw_mbytes_per_sec": 0, 00:10:37.907 "r_mbytes_per_sec": 0, 00:10:37.907 "w_mbytes_per_sec": 0 00:10:37.907 }, 00:10:37.907 "claimed": true, 00:10:37.907 "claim_type": "exclusive_write", 00:10:37.907 "zoned": false, 00:10:37.907 "supported_io_types": { 00:10:37.907 "read": true, 00:10:37.907 "write": true, 00:10:37.907 "unmap": true, 00:10:37.907 "flush": true, 00:10:37.907 "reset": true, 00:10:37.907 "nvme_admin": false, 00:10:37.907 "nvme_io": false, 00:10:37.907 "nvme_io_md": false, 00:10:37.907 "write_zeroes": true, 00:10:37.907 "zcopy": true, 00:10:37.907 "get_zone_info": false, 00:10:37.907 "zone_management": false, 00:10:37.907 "zone_append": false, 00:10:37.907 "compare": false, 00:10:37.907 "compare_and_write": false, 00:10:37.907 "abort": true, 00:10:37.907 "seek_hole": false, 00:10:37.907 "seek_data": false, 00:10:37.907 "copy": true, 00:10:37.907 "nvme_iov_md": false 00:10:37.907 }, 00:10:37.907 "memory_domains": [ 00:10:37.907 { 00:10:37.907 "dma_device_id": "system", 00:10:37.907 "dma_device_type": 1 00:10:37.907 }, 00:10:37.907 { 00:10:37.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.907 "dma_device_type": 2 00:10:37.907 } 
00:10:37.907 ], 00:10:37.907 "driver_specific": {} 00:10:37.907 } 00:10:37.907 ] 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 03:17:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.907 "name": "Existed_Raid", 00:10:37.907 "uuid": "aa01a06f-0436-4f58-94e7-372ca7038e6b", 00:10:37.907 "strip_size_kb": 64, 00:10:37.907 "state": "configuring", 00:10:37.907 "raid_level": "concat", 00:10:37.907 "superblock": true, 00:10:37.907 "num_base_bdevs": 4, 00:10:37.907 "num_base_bdevs_discovered": 1, 00:10:37.907 "num_base_bdevs_operational": 4, 00:10:37.907 "base_bdevs_list": [ 00:10:37.907 { 00:10:37.907 "name": "BaseBdev1", 00:10:37.907 "uuid": "1133bcc6-3f19-49b4-bf93-d28da0e02b96", 00:10:37.907 "is_configured": true, 00:10:37.907 "data_offset": 2048, 00:10:37.907 "data_size": 63488 00:10:37.907 }, 00:10:37.907 { 00:10:37.907 "name": "BaseBdev2", 00:10:37.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.907 "is_configured": false, 00:10:37.907 "data_offset": 0, 00:10:37.907 "data_size": 0 00:10:37.907 }, 00:10:37.907 { 00:10:37.907 "name": "BaseBdev3", 00:10:37.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.907 "is_configured": false, 00:10:37.907 "data_offset": 0, 00:10:37.907 "data_size": 0 00:10:37.907 }, 00:10:37.907 { 00:10:37.907 "name": "BaseBdev4", 00:10:37.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.907 "is_configured": false, 00:10:37.907 "data_offset": 0, 00:10:37.907 "data_size": 0 00:10:37.907 } 00:10:37.907 ] 00:10:37.907 }' 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.907 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.475 03:17:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.475 [2024-11-20 03:17:27.884445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.475 [2024-11-20 03:17:27.884505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.475 [2024-11-20 03:17:27.892489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.475 [2024-11-20 03:17:27.894461] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.475 [2024-11-20 03:17:27.894507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.475 [2024-11-20 03:17:27.894518] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.475 [2024-11-20 03:17:27.894530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.475 [2024-11-20 03:17:27.894538] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.475 [2024-11-20 03:17:27.894547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:38.475 "name": "Existed_Raid", 00:10:38.475 "uuid": "c1c97714-1d5e-42dd-9c63-c337365a4188", 00:10:38.475 "strip_size_kb": 64, 00:10:38.475 "state": "configuring", 00:10:38.475 "raid_level": "concat", 00:10:38.475 "superblock": true, 00:10:38.475 "num_base_bdevs": 4, 00:10:38.475 "num_base_bdevs_discovered": 1, 00:10:38.475 "num_base_bdevs_operational": 4, 00:10:38.475 "base_bdevs_list": [ 00:10:38.475 { 00:10:38.475 "name": "BaseBdev1", 00:10:38.475 "uuid": "1133bcc6-3f19-49b4-bf93-d28da0e02b96", 00:10:38.475 "is_configured": true, 00:10:38.475 "data_offset": 2048, 00:10:38.475 "data_size": 63488 00:10:38.475 }, 00:10:38.475 { 00:10:38.475 "name": "BaseBdev2", 00:10:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.475 "is_configured": false, 00:10:38.475 "data_offset": 0, 00:10:38.475 "data_size": 0 00:10:38.475 }, 00:10:38.475 { 00:10:38.475 "name": "BaseBdev3", 00:10:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.475 "is_configured": false, 00:10:38.475 "data_offset": 0, 00:10:38.475 "data_size": 0 00:10:38.475 }, 00:10:38.475 { 00:10:38.475 "name": "BaseBdev4", 00:10:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.475 "is_configured": false, 00:10:38.475 "data_offset": 0, 00:10:38.475 "data_size": 0 00:10:38.475 } 00:10:38.475 ] 00:10:38.475 }' 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.475 03:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.735 [2024-11-20 03:17:28.346939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:38.735 BaseBdev2 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.735 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.000 [ 00:10:39.000 { 00:10:39.000 "name": "BaseBdev2", 00:10:39.000 "aliases": [ 00:10:39.000 "c258b98a-10e5-4cbc-9f4b-22c4b65651d9" 00:10:39.000 ], 00:10:39.000 "product_name": "Malloc disk", 00:10:39.000 "block_size": 512, 00:10:39.000 "num_blocks": 65536, 00:10:39.000 "uuid": "c258b98a-10e5-4cbc-9f4b-22c4b65651d9", 
00:10:39.000 "assigned_rate_limits": { 00:10:39.000 "rw_ios_per_sec": 0, 00:10:39.000 "rw_mbytes_per_sec": 0, 00:10:39.000 "r_mbytes_per_sec": 0, 00:10:39.000 "w_mbytes_per_sec": 0 00:10:39.000 }, 00:10:39.000 "claimed": true, 00:10:39.000 "claim_type": "exclusive_write", 00:10:39.000 "zoned": false, 00:10:39.000 "supported_io_types": { 00:10:39.000 "read": true, 00:10:39.000 "write": true, 00:10:39.000 "unmap": true, 00:10:39.000 "flush": true, 00:10:39.000 "reset": true, 00:10:39.000 "nvme_admin": false, 00:10:39.000 "nvme_io": false, 00:10:39.000 "nvme_io_md": false, 00:10:39.000 "write_zeroes": true, 00:10:39.000 "zcopy": true, 00:10:39.000 "get_zone_info": false, 00:10:39.000 "zone_management": false, 00:10:39.000 "zone_append": false, 00:10:39.000 "compare": false, 00:10:39.000 "compare_and_write": false, 00:10:39.000 "abort": true, 00:10:39.000 "seek_hole": false, 00:10:39.000 "seek_data": false, 00:10:39.000 "copy": true, 00:10:39.000 "nvme_iov_md": false 00:10:39.000 }, 00:10:39.000 "memory_domains": [ 00:10:39.000 { 00:10:39.000 "dma_device_id": "system", 00:10:39.000 "dma_device_type": 1 00:10:39.000 }, 00:10:39.000 { 00:10:39.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.000 "dma_device_type": 2 00:10:39.000 } 00:10:39.000 ], 00:10:39.000 "driver_specific": {} 00:10:39.000 } 00:10:39.000 ] 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.000 "name": "Existed_Raid", 00:10:39.000 "uuid": "c1c97714-1d5e-42dd-9c63-c337365a4188", 00:10:39.000 "strip_size_kb": 64, 00:10:39.000 "state": "configuring", 00:10:39.000 "raid_level": "concat", 00:10:39.000 "superblock": true, 00:10:39.000 "num_base_bdevs": 4, 00:10:39.000 "num_base_bdevs_discovered": 2, 00:10:39.000 
"num_base_bdevs_operational": 4, 00:10:39.000 "base_bdevs_list": [ 00:10:39.000 { 00:10:39.000 "name": "BaseBdev1", 00:10:39.000 "uuid": "1133bcc6-3f19-49b4-bf93-d28da0e02b96", 00:10:39.000 "is_configured": true, 00:10:39.000 "data_offset": 2048, 00:10:39.000 "data_size": 63488 00:10:39.000 }, 00:10:39.000 { 00:10:39.000 "name": "BaseBdev2", 00:10:39.000 "uuid": "c258b98a-10e5-4cbc-9f4b-22c4b65651d9", 00:10:39.000 "is_configured": true, 00:10:39.000 "data_offset": 2048, 00:10:39.000 "data_size": 63488 00:10:39.000 }, 00:10:39.000 { 00:10:39.000 "name": "BaseBdev3", 00:10:39.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.000 "is_configured": false, 00:10:39.000 "data_offset": 0, 00:10:39.000 "data_size": 0 00:10:39.000 }, 00:10:39.000 { 00:10:39.000 "name": "BaseBdev4", 00:10:39.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.000 "is_configured": false, 00:10:39.000 "data_offset": 0, 00:10:39.000 "data_size": 0 00:10:39.000 } 00:10:39.000 ] 00:10:39.000 }' 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.000 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.295 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.295 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.295 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.295 [2024-11-20 03:17:28.869588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.295 BaseBdev3 00:10:39.295 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.295 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:39.295 03:17:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:39.295 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.295 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.295 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.296 [ 00:10:39.296 { 00:10:39.296 "name": "BaseBdev3", 00:10:39.296 "aliases": [ 00:10:39.296 "e82a6639-cfeb-4777-867e-0a11dda5e676" 00:10:39.296 ], 00:10:39.296 "product_name": "Malloc disk", 00:10:39.296 "block_size": 512, 00:10:39.296 "num_blocks": 65536, 00:10:39.296 "uuid": "e82a6639-cfeb-4777-867e-0a11dda5e676", 00:10:39.296 "assigned_rate_limits": { 00:10:39.296 "rw_ios_per_sec": 0, 00:10:39.296 "rw_mbytes_per_sec": 0, 00:10:39.296 "r_mbytes_per_sec": 0, 00:10:39.296 "w_mbytes_per_sec": 0 00:10:39.296 }, 00:10:39.296 "claimed": true, 00:10:39.296 "claim_type": "exclusive_write", 00:10:39.296 "zoned": false, 00:10:39.296 "supported_io_types": { 
00:10:39.296 "read": true, 00:10:39.296 "write": true, 00:10:39.296 "unmap": true, 00:10:39.296 "flush": true, 00:10:39.296 "reset": true, 00:10:39.296 "nvme_admin": false, 00:10:39.296 "nvme_io": false, 00:10:39.296 "nvme_io_md": false, 00:10:39.296 "write_zeroes": true, 00:10:39.296 "zcopy": true, 00:10:39.296 "get_zone_info": false, 00:10:39.296 "zone_management": false, 00:10:39.296 "zone_append": false, 00:10:39.296 "compare": false, 00:10:39.296 "compare_and_write": false, 00:10:39.296 "abort": true, 00:10:39.296 "seek_hole": false, 00:10:39.296 "seek_data": false, 00:10:39.296 "copy": true, 00:10:39.296 "nvme_iov_md": false 00:10:39.296 }, 00:10:39.296 "memory_domains": [ 00:10:39.296 { 00:10:39.296 "dma_device_id": "system", 00:10:39.296 "dma_device_type": 1 00:10:39.296 }, 00:10:39.296 { 00:10:39.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.296 "dma_device_type": 2 00:10:39.296 } 00:10:39.296 ], 00:10:39.296 "driver_specific": {} 00:10:39.296 } 00:10:39.296 ] 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.296 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.568 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.568 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.568 "name": "Existed_Raid", 00:10:39.568 "uuid": "c1c97714-1d5e-42dd-9c63-c337365a4188", 00:10:39.568 "strip_size_kb": 64, 00:10:39.568 "state": "configuring", 00:10:39.568 "raid_level": "concat", 00:10:39.568 "superblock": true, 00:10:39.568 "num_base_bdevs": 4, 00:10:39.568 "num_base_bdevs_discovered": 3, 00:10:39.568 "num_base_bdevs_operational": 4, 00:10:39.568 "base_bdevs_list": [ 00:10:39.568 { 00:10:39.568 "name": "BaseBdev1", 00:10:39.568 "uuid": "1133bcc6-3f19-49b4-bf93-d28da0e02b96", 00:10:39.568 "is_configured": true, 00:10:39.568 "data_offset": 2048, 00:10:39.568 "data_size": 63488 00:10:39.568 }, 00:10:39.568 { 00:10:39.568 "name": "BaseBdev2", 00:10:39.568 
"uuid": "c258b98a-10e5-4cbc-9f4b-22c4b65651d9", 00:10:39.568 "is_configured": true, 00:10:39.568 "data_offset": 2048, 00:10:39.568 "data_size": 63488 00:10:39.568 }, 00:10:39.568 { 00:10:39.568 "name": "BaseBdev3", 00:10:39.568 "uuid": "e82a6639-cfeb-4777-867e-0a11dda5e676", 00:10:39.568 "is_configured": true, 00:10:39.568 "data_offset": 2048, 00:10:39.568 "data_size": 63488 00:10:39.568 }, 00:10:39.568 { 00:10:39.568 "name": "BaseBdev4", 00:10:39.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.568 "is_configured": false, 00:10:39.568 "data_offset": 0, 00:10:39.568 "data_size": 0 00:10:39.568 } 00:10:39.568 ] 00:10:39.568 }' 00:10:39.568 03:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.568 03:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.828 [2024-11-20 03:17:29.365589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.828 [2024-11-20 03:17:29.365863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.828 [2024-11-20 03:17:29.365878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.828 [2024-11-20 03:17:29.366153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:39.828 [2024-11-20 03:17:29.366319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.828 [2024-11-20 03:17:29.366332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:10:39.828 [2024-11-20 03:17:29.366477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.828 BaseBdev4 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.828 [ 00:10:39.828 { 00:10:39.828 "name": "BaseBdev4", 00:10:39.828 "aliases": [ 00:10:39.828 "729c5582-8b7d-4150-9a74-15b61b7d2bb6" 00:10:39.828 ], 00:10:39.828 "product_name": "Malloc disk", 00:10:39.828 "block_size": 512, 
00:10:39.828 "num_blocks": 65536, 00:10:39.828 "uuid": "729c5582-8b7d-4150-9a74-15b61b7d2bb6", 00:10:39.828 "assigned_rate_limits": { 00:10:39.828 "rw_ios_per_sec": 0, 00:10:39.828 "rw_mbytes_per_sec": 0, 00:10:39.828 "r_mbytes_per_sec": 0, 00:10:39.828 "w_mbytes_per_sec": 0 00:10:39.828 }, 00:10:39.828 "claimed": true, 00:10:39.828 "claim_type": "exclusive_write", 00:10:39.828 "zoned": false, 00:10:39.828 "supported_io_types": { 00:10:39.828 "read": true, 00:10:39.828 "write": true, 00:10:39.828 "unmap": true, 00:10:39.828 "flush": true, 00:10:39.828 "reset": true, 00:10:39.828 "nvme_admin": false, 00:10:39.828 "nvme_io": false, 00:10:39.828 "nvme_io_md": false, 00:10:39.828 "write_zeroes": true, 00:10:39.828 "zcopy": true, 00:10:39.828 "get_zone_info": false, 00:10:39.828 "zone_management": false, 00:10:39.828 "zone_append": false, 00:10:39.828 "compare": false, 00:10:39.828 "compare_and_write": false, 00:10:39.828 "abort": true, 00:10:39.828 "seek_hole": false, 00:10:39.828 "seek_data": false, 00:10:39.828 "copy": true, 00:10:39.828 "nvme_iov_md": false 00:10:39.828 }, 00:10:39.828 "memory_domains": [ 00:10:39.828 { 00:10:39.828 "dma_device_id": "system", 00:10:39.828 "dma_device_type": 1 00:10:39.828 }, 00:10:39.828 { 00:10:39.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.828 "dma_device_type": 2 00:10:39.828 } 00:10:39.828 ], 00:10:39.828 "driver_specific": {} 00:10:39.828 } 00:10:39.828 ] 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.828 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.828 "name": "Existed_Raid", 00:10:39.828 "uuid": "c1c97714-1d5e-42dd-9c63-c337365a4188", 00:10:39.828 "strip_size_kb": 64, 00:10:39.828 "state": "online", 00:10:39.828 "raid_level": "concat", 00:10:39.829 "superblock": true, 00:10:39.829 "num_base_bdevs": 
4, 00:10:39.829 "num_base_bdevs_discovered": 4, 00:10:39.829 "num_base_bdevs_operational": 4, 00:10:39.829 "base_bdevs_list": [ 00:10:39.829 { 00:10:39.829 "name": "BaseBdev1", 00:10:39.829 "uuid": "1133bcc6-3f19-49b4-bf93-d28da0e02b96", 00:10:39.829 "is_configured": true, 00:10:39.829 "data_offset": 2048, 00:10:39.829 "data_size": 63488 00:10:39.829 }, 00:10:39.829 { 00:10:39.829 "name": "BaseBdev2", 00:10:39.829 "uuid": "c258b98a-10e5-4cbc-9f4b-22c4b65651d9", 00:10:39.829 "is_configured": true, 00:10:39.829 "data_offset": 2048, 00:10:39.829 "data_size": 63488 00:10:39.829 }, 00:10:39.829 { 00:10:39.829 "name": "BaseBdev3", 00:10:39.829 "uuid": "e82a6639-cfeb-4777-867e-0a11dda5e676", 00:10:39.829 "is_configured": true, 00:10:39.829 "data_offset": 2048, 00:10:39.829 "data_size": 63488 00:10:39.829 }, 00:10:39.829 { 00:10:39.829 "name": "BaseBdev4", 00:10:39.829 "uuid": "729c5582-8b7d-4150-9a74-15b61b7d2bb6", 00:10:39.829 "is_configured": true, 00:10:39.829 "data_offset": 2048, 00:10:39.829 "data_size": 63488 00:10:39.829 } 00:10:39.829 ] 00:10:39.829 }' 00:10:39.829 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.829 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.396 
03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.396 [2024-11-20 03:17:29.817213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.396 "name": "Existed_Raid", 00:10:40.396 "aliases": [ 00:10:40.396 "c1c97714-1d5e-42dd-9c63-c337365a4188" 00:10:40.396 ], 00:10:40.396 "product_name": "Raid Volume", 00:10:40.396 "block_size": 512, 00:10:40.396 "num_blocks": 253952, 00:10:40.396 "uuid": "c1c97714-1d5e-42dd-9c63-c337365a4188", 00:10:40.396 "assigned_rate_limits": { 00:10:40.396 "rw_ios_per_sec": 0, 00:10:40.396 "rw_mbytes_per_sec": 0, 00:10:40.396 "r_mbytes_per_sec": 0, 00:10:40.396 "w_mbytes_per_sec": 0 00:10:40.396 }, 00:10:40.396 "claimed": false, 00:10:40.396 "zoned": false, 00:10:40.396 "supported_io_types": { 00:10:40.396 "read": true, 00:10:40.396 "write": true, 00:10:40.396 "unmap": true, 00:10:40.396 "flush": true, 00:10:40.396 "reset": true, 00:10:40.396 "nvme_admin": false, 00:10:40.396 "nvme_io": false, 00:10:40.396 "nvme_io_md": false, 00:10:40.396 "write_zeroes": true, 00:10:40.396 "zcopy": false, 00:10:40.396 "get_zone_info": false, 00:10:40.396 "zone_management": false, 00:10:40.396 "zone_append": false, 00:10:40.396 "compare": false, 00:10:40.396 "compare_and_write": false, 00:10:40.396 "abort": false, 00:10:40.396 "seek_hole": false, 00:10:40.396 "seek_data": false, 00:10:40.396 "copy": false, 00:10:40.396 
"nvme_iov_md": false 00:10:40.396 }, 00:10:40.396 "memory_domains": [ 00:10:40.396 { 00:10:40.396 "dma_device_id": "system", 00:10:40.396 "dma_device_type": 1 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.396 "dma_device_type": 2 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "dma_device_id": "system", 00:10:40.396 "dma_device_type": 1 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.396 "dma_device_type": 2 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "dma_device_id": "system", 00:10:40.396 "dma_device_type": 1 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.396 "dma_device_type": 2 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "dma_device_id": "system", 00:10:40.396 "dma_device_type": 1 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.396 "dma_device_type": 2 00:10:40.396 } 00:10:40.396 ], 00:10:40.396 "driver_specific": { 00:10:40.396 "raid": { 00:10:40.396 "uuid": "c1c97714-1d5e-42dd-9c63-c337365a4188", 00:10:40.396 "strip_size_kb": 64, 00:10:40.396 "state": "online", 00:10:40.396 "raid_level": "concat", 00:10:40.396 "superblock": true, 00:10:40.396 "num_base_bdevs": 4, 00:10:40.396 "num_base_bdevs_discovered": 4, 00:10:40.396 "num_base_bdevs_operational": 4, 00:10:40.396 "base_bdevs_list": [ 00:10:40.396 { 00:10:40.396 "name": "BaseBdev1", 00:10:40.396 "uuid": "1133bcc6-3f19-49b4-bf93-d28da0e02b96", 00:10:40.396 "is_configured": true, 00:10:40.396 "data_offset": 2048, 00:10:40.396 "data_size": 63488 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "name": "BaseBdev2", 00:10:40.396 "uuid": "c258b98a-10e5-4cbc-9f4b-22c4b65651d9", 00:10:40.396 "is_configured": true, 00:10:40.396 "data_offset": 2048, 00:10:40.396 "data_size": 63488 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "name": "BaseBdev3", 00:10:40.396 "uuid": "e82a6639-cfeb-4777-867e-0a11dda5e676", 00:10:40.396 "is_configured": true, 
00:10:40.396 "data_offset": 2048, 00:10:40.396 "data_size": 63488 00:10:40.396 }, 00:10:40.396 { 00:10:40.396 "name": "BaseBdev4", 00:10:40.396 "uuid": "729c5582-8b7d-4150-9a74-15b61b7d2bb6", 00:10:40.396 "is_configured": true, 00:10:40.396 "data_offset": 2048, 00:10:40.396 "data_size": 63488 00:10:40.396 } 00:10:40.396 ] 00:10:40.396 } 00:10:40.396 } 00:10:40.396 }' 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.396 BaseBdev2 00:10:40.396 BaseBdev3 00:10:40.396 BaseBdev4' 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.396 03:17:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.396 03:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.396 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.396 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.396 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.396 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.655 [2024-11-20 03:17:30.136427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.655 [2024-11-20 03:17:30.136471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.655 [2024-11-20 03:17:30.136528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.655 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:40.914 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.914 "name": "Existed_Raid", 00:10:40.914 "uuid": "c1c97714-1d5e-42dd-9c63-c337365a4188", 00:10:40.914 "strip_size_kb": 64, 00:10:40.914 "state": "offline", 00:10:40.914 "raid_level": "concat", 00:10:40.914 "superblock": true, 00:10:40.914 "num_base_bdevs": 4, 00:10:40.914 "num_base_bdevs_discovered": 3, 00:10:40.914 "num_base_bdevs_operational": 3, 00:10:40.914 "base_bdevs_list": [ 00:10:40.914 { 00:10:40.914 "name": null, 00:10:40.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.914 "is_configured": false, 00:10:40.914 "data_offset": 0, 00:10:40.914 "data_size": 63488 00:10:40.914 }, 00:10:40.914 { 00:10:40.914 "name": "BaseBdev2", 00:10:40.914 "uuid": "c258b98a-10e5-4cbc-9f4b-22c4b65651d9", 00:10:40.914 "is_configured": true, 00:10:40.914 "data_offset": 2048, 00:10:40.914 "data_size": 63488 00:10:40.914 }, 00:10:40.914 { 00:10:40.914 "name": "BaseBdev3", 00:10:40.914 "uuid": "e82a6639-cfeb-4777-867e-0a11dda5e676", 00:10:40.914 "is_configured": true, 00:10:40.914 "data_offset": 2048, 00:10:40.914 "data_size": 63488 00:10:40.914 }, 00:10:40.914 { 00:10:40.914 "name": "BaseBdev4", 00:10:40.914 "uuid": "729c5582-8b7d-4150-9a74-15b61b7d2bb6", 00:10:40.914 "is_configured": true, 00:10:40.914 "data_offset": 2048, 00:10:40.914 "data_size": 63488 00:10:40.914 } 00:10:40.914 ] 00:10:40.914 }' 00:10:40.914 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.914 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.173 
03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.173 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.173 [2024-11-20 03:17:30.748509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.431 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.431 [2024-11-20 03:17:30.904608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:41.431 03:17:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.431 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.431 [2024-11-20 03:17:31.061346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:41.431 [2024-11-20 03:17:31.061405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.691 BaseBdev2 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.691 [ 00:10:41.691 { 00:10:41.691 "name": "BaseBdev2", 00:10:41.691 "aliases": [ 00:10:41.691 
"487a0df2-a720-4086-ae7b-9c5e6ab866e9" 00:10:41.691 ], 00:10:41.691 "product_name": "Malloc disk", 00:10:41.691 "block_size": 512, 00:10:41.691 "num_blocks": 65536, 00:10:41.691 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:41.691 "assigned_rate_limits": { 00:10:41.691 "rw_ios_per_sec": 0, 00:10:41.691 "rw_mbytes_per_sec": 0, 00:10:41.691 "r_mbytes_per_sec": 0, 00:10:41.691 "w_mbytes_per_sec": 0 00:10:41.691 }, 00:10:41.691 "claimed": false, 00:10:41.691 "zoned": false, 00:10:41.691 "supported_io_types": { 00:10:41.691 "read": true, 00:10:41.691 "write": true, 00:10:41.691 "unmap": true, 00:10:41.691 "flush": true, 00:10:41.691 "reset": true, 00:10:41.691 "nvme_admin": false, 00:10:41.691 "nvme_io": false, 00:10:41.691 "nvme_io_md": false, 00:10:41.691 "write_zeroes": true, 00:10:41.691 "zcopy": true, 00:10:41.691 "get_zone_info": false, 00:10:41.691 "zone_management": false, 00:10:41.691 "zone_append": false, 00:10:41.691 "compare": false, 00:10:41.691 "compare_and_write": false, 00:10:41.691 "abort": true, 00:10:41.691 "seek_hole": false, 00:10:41.691 "seek_data": false, 00:10:41.691 "copy": true, 00:10:41.691 "nvme_iov_md": false 00:10:41.691 }, 00:10:41.691 "memory_domains": [ 00:10:41.691 { 00:10:41.691 "dma_device_id": "system", 00:10:41.691 "dma_device_type": 1 00:10:41.691 }, 00:10:41.691 { 00:10:41.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.691 "dma_device_type": 2 00:10:41.691 } 00:10:41.691 ], 00:10:41.691 "driver_specific": {} 00:10:41.691 } 00:10:41.691 ] 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.691 03:17:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.691 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.951 BaseBdev3 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.951 [ 00:10:41.951 { 
00:10:41.951 "name": "BaseBdev3", 00:10:41.951 "aliases": [ 00:10:41.951 "7b39eb5c-efa1-41a5-93bf-50ab67513f89" 00:10:41.951 ], 00:10:41.951 "product_name": "Malloc disk", 00:10:41.951 "block_size": 512, 00:10:41.951 "num_blocks": 65536, 00:10:41.951 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:41.951 "assigned_rate_limits": { 00:10:41.951 "rw_ios_per_sec": 0, 00:10:41.951 "rw_mbytes_per_sec": 0, 00:10:41.951 "r_mbytes_per_sec": 0, 00:10:41.951 "w_mbytes_per_sec": 0 00:10:41.951 }, 00:10:41.951 "claimed": false, 00:10:41.951 "zoned": false, 00:10:41.951 "supported_io_types": { 00:10:41.951 "read": true, 00:10:41.951 "write": true, 00:10:41.951 "unmap": true, 00:10:41.951 "flush": true, 00:10:41.951 "reset": true, 00:10:41.951 "nvme_admin": false, 00:10:41.951 "nvme_io": false, 00:10:41.951 "nvme_io_md": false, 00:10:41.951 "write_zeroes": true, 00:10:41.951 "zcopy": true, 00:10:41.951 "get_zone_info": false, 00:10:41.951 "zone_management": false, 00:10:41.951 "zone_append": false, 00:10:41.951 "compare": false, 00:10:41.951 "compare_and_write": false, 00:10:41.951 "abort": true, 00:10:41.951 "seek_hole": false, 00:10:41.951 "seek_data": false, 00:10:41.951 "copy": true, 00:10:41.951 "nvme_iov_md": false 00:10:41.951 }, 00:10:41.951 "memory_domains": [ 00:10:41.951 { 00:10:41.951 "dma_device_id": "system", 00:10:41.951 "dma_device_type": 1 00:10:41.951 }, 00:10:41.951 { 00:10:41.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.951 "dma_device_type": 2 00:10:41.951 } 00:10:41.951 ], 00:10:41.951 "driver_specific": {} 00:10:41.951 } 00:10:41.951 ] 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.951 BaseBdev4 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.951 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:41.952 [ 00:10:41.952 { 00:10:41.952 "name": "BaseBdev4", 00:10:41.952 "aliases": [ 00:10:41.952 "6eb657df-f5ed-4384-bbb1-d504aebf0a7b" 00:10:41.952 ], 00:10:41.952 "product_name": "Malloc disk", 00:10:41.952 "block_size": 512, 00:10:41.952 "num_blocks": 65536, 00:10:41.952 "uuid": "6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:41.952 "assigned_rate_limits": { 00:10:41.952 "rw_ios_per_sec": 0, 00:10:41.952 "rw_mbytes_per_sec": 0, 00:10:41.952 "r_mbytes_per_sec": 0, 00:10:41.952 "w_mbytes_per_sec": 0 00:10:41.952 }, 00:10:41.952 "claimed": false, 00:10:41.952 "zoned": false, 00:10:41.952 "supported_io_types": { 00:10:41.952 "read": true, 00:10:41.952 "write": true, 00:10:41.952 "unmap": true, 00:10:41.952 "flush": true, 00:10:41.952 "reset": true, 00:10:41.952 "nvme_admin": false, 00:10:41.952 "nvme_io": false, 00:10:41.952 "nvme_io_md": false, 00:10:41.952 "write_zeroes": true, 00:10:41.952 "zcopy": true, 00:10:41.952 "get_zone_info": false, 00:10:41.952 "zone_management": false, 00:10:41.952 "zone_append": false, 00:10:41.952 "compare": false, 00:10:41.952 "compare_and_write": false, 00:10:41.952 "abort": true, 00:10:41.952 "seek_hole": false, 00:10:41.952 "seek_data": false, 00:10:41.952 "copy": true, 00:10:41.952 "nvme_iov_md": false 00:10:41.952 }, 00:10:41.952 "memory_domains": [ 00:10:41.952 { 00:10:41.952 "dma_device_id": "system", 00:10:41.952 "dma_device_type": 1 00:10:41.952 }, 00:10:41.952 { 00:10:41.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.952 "dma_device_type": 2 00:10:41.952 } 00:10:41.952 ], 00:10:41.952 "driver_specific": {} 00:10:41.952 } 00:10:41.952 ] 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.952 03:17:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.952 [2024-11-20 03:17:31.459868] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.952 [2024-11-20 03:17:31.459916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.952 [2024-11-20 03:17:31.459939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.952 [2024-11-20 03:17:31.461778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.952 [2024-11-20 03:17:31.461835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.952 "name": "Existed_Raid", 00:10:41.952 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:41.952 "strip_size_kb": 64, 00:10:41.952 "state": "configuring", 00:10:41.952 "raid_level": "concat", 00:10:41.952 "superblock": true, 00:10:41.952 "num_base_bdevs": 4, 00:10:41.952 "num_base_bdevs_discovered": 3, 00:10:41.952 "num_base_bdevs_operational": 4, 00:10:41.952 "base_bdevs_list": [ 00:10:41.952 { 00:10:41.952 "name": "BaseBdev1", 00:10:41.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.952 "is_configured": false, 00:10:41.952 "data_offset": 0, 00:10:41.952 "data_size": 0 00:10:41.952 }, 00:10:41.952 { 00:10:41.952 "name": "BaseBdev2", 00:10:41.952 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:41.952 "is_configured": true, 00:10:41.952 "data_offset": 2048, 00:10:41.952 "data_size": 63488 
00:10:41.952 }, 00:10:41.952 { 00:10:41.952 "name": "BaseBdev3", 00:10:41.952 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:41.952 "is_configured": true, 00:10:41.952 "data_offset": 2048, 00:10:41.952 "data_size": 63488 00:10:41.952 }, 00:10:41.952 { 00:10:41.952 "name": "BaseBdev4", 00:10:41.952 "uuid": "6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:41.952 "is_configured": true, 00:10:41.952 "data_offset": 2048, 00:10:41.952 "data_size": 63488 00:10:41.952 } 00:10:41.952 ] 00:10:41.952 }' 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.952 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.519 [2024-11-20 03:17:31.871177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.519 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.520 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.520 "name": "Existed_Raid", 00:10:42.520 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:42.520 "strip_size_kb": 64, 00:10:42.520 "state": "configuring", 00:10:42.520 "raid_level": "concat", 00:10:42.520 "superblock": true, 00:10:42.520 "num_base_bdevs": 4, 00:10:42.520 "num_base_bdevs_discovered": 2, 00:10:42.520 "num_base_bdevs_operational": 4, 00:10:42.520 "base_bdevs_list": [ 00:10:42.520 { 00:10:42.520 "name": "BaseBdev1", 00:10:42.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.520 "is_configured": false, 00:10:42.520 "data_offset": 0, 00:10:42.520 "data_size": 0 00:10:42.520 }, 00:10:42.520 { 00:10:42.520 "name": null, 00:10:42.520 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:42.520 "is_configured": false, 00:10:42.520 "data_offset": 0, 00:10:42.520 "data_size": 63488 
00:10:42.520 }, 00:10:42.520 { 00:10:42.520 "name": "BaseBdev3", 00:10:42.520 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:42.520 "is_configured": true, 00:10:42.520 "data_offset": 2048, 00:10:42.520 "data_size": 63488 00:10:42.520 }, 00:10:42.520 { 00:10:42.520 "name": "BaseBdev4", 00:10:42.520 "uuid": "6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:42.520 "is_configured": true, 00:10:42.520 "data_offset": 2048, 00:10:42.520 "data_size": 63488 00:10:42.520 } 00:10:42.520 ] 00:10:42.520 }' 00:10:42.520 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.520 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.779 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.779 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.779 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.779 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.779 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.038 [2024-11-20 03:17:32.464424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.038 BaseBdev1 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.038 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.038 [ 00:10:43.038 { 00:10:43.038 "name": "BaseBdev1", 00:10:43.038 "aliases": [ 00:10:43.038 "c3c3156c-253e-4e65-a854-ad4d16a31e81" 00:10:43.038 ], 00:10:43.038 "product_name": "Malloc disk", 00:10:43.038 "block_size": 512, 00:10:43.038 "num_blocks": 65536, 00:10:43.038 "uuid": "c3c3156c-253e-4e65-a854-ad4d16a31e81", 00:10:43.038 "assigned_rate_limits": { 00:10:43.038 "rw_ios_per_sec": 0, 00:10:43.038 "rw_mbytes_per_sec": 0, 
00:10:43.038 "r_mbytes_per_sec": 0, 00:10:43.038 "w_mbytes_per_sec": 0 00:10:43.038 }, 00:10:43.038 "claimed": true, 00:10:43.038 "claim_type": "exclusive_write", 00:10:43.038 "zoned": false, 00:10:43.038 "supported_io_types": { 00:10:43.038 "read": true, 00:10:43.038 "write": true, 00:10:43.038 "unmap": true, 00:10:43.038 "flush": true, 00:10:43.038 "reset": true, 00:10:43.038 "nvme_admin": false, 00:10:43.038 "nvme_io": false, 00:10:43.038 "nvme_io_md": false, 00:10:43.038 "write_zeroes": true, 00:10:43.038 "zcopy": true, 00:10:43.038 "get_zone_info": false, 00:10:43.038 "zone_management": false, 00:10:43.038 "zone_append": false, 00:10:43.038 "compare": false, 00:10:43.039 "compare_and_write": false, 00:10:43.039 "abort": true, 00:10:43.039 "seek_hole": false, 00:10:43.039 "seek_data": false, 00:10:43.039 "copy": true, 00:10:43.039 "nvme_iov_md": false 00:10:43.039 }, 00:10:43.039 "memory_domains": [ 00:10:43.039 { 00:10:43.039 "dma_device_id": "system", 00:10:43.039 "dma_device_type": 1 00:10:43.039 }, 00:10:43.039 { 00:10:43.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.039 "dma_device_type": 2 00:10:43.039 } 00:10:43.039 ], 00:10:43.039 "driver_specific": {} 00:10:43.039 } 00:10:43.039 ] 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.039 03:17:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.039 "name": "Existed_Raid", 00:10:43.039 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:43.039 "strip_size_kb": 64, 00:10:43.039 "state": "configuring", 00:10:43.039 "raid_level": "concat", 00:10:43.039 "superblock": true, 00:10:43.039 "num_base_bdevs": 4, 00:10:43.039 "num_base_bdevs_discovered": 3, 00:10:43.039 "num_base_bdevs_operational": 4, 00:10:43.039 "base_bdevs_list": [ 00:10:43.039 { 00:10:43.039 "name": "BaseBdev1", 00:10:43.039 "uuid": "c3c3156c-253e-4e65-a854-ad4d16a31e81", 00:10:43.039 "is_configured": true, 00:10:43.039 "data_offset": 2048, 00:10:43.039 "data_size": 63488 00:10:43.039 }, 00:10:43.039 { 
00:10:43.039 "name": null, 00:10:43.039 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:43.039 "is_configured": false, 00:10:43.039 "data_offset": 0, 00:10:43.039 "data_size": 63488 00:10:43.039 }, 00:10:43.039 { 00:10:43.039 "name": "BaseBdev3", 00:10:43.039 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:43.039 "is_configured": true, 00:10:43.039 "data_offset": 2048, 00:10:43.039 "data_size": 63488 00:10:43.039 }, 00:10:43.039 { 00:10:43.039 "name": "BaseBdev4", 00:10:43.039 "uuid": "6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:43.039 "is_configured": true, 00:10:43.039 "data_offset": 2048, 00:10:43.039 "data_size": 63488 00:10:43.039 } 00:10:43.039 ] 00:10:43.039 }' 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.039 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.607 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.607 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.607 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.607 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.607 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.607 [2024-11-20 03:17:33.011646] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.607 03:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.607 "name": "Existed_Raid", 00:10:43.607 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:43.607 "strip_size_kb": 64, 00:10:43.607 "state": "configuring", 00:10:43.607 "raid_level": "concat", 00:10:43.607 "superblock": true, 00:10:43.607 "num_base_bdevs": 4, 00:10:43.607 "num_base_bdevs_discovered": 2, 00:10:43.607 "num_base_bdevs_operational": 4, 00:10:43.607 "base_bdevs_list": [ 00:10:43.607 { 00:10:43.607 "name": "BaseBdev1", 00:10:43.607 "uuid": "c3c3156c-253e-4e65-a854-ad4d16a31e81", 00:10:43.607 "is_configured": true, 00:10:43.607 "data_offset": 2048, 00:10:43.607 "data_size": 63488 00:10:43.607 }, 00:10:43.607 { 00:10:43.607 "name": null, 00:10:43.607 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:43.607 "is_configured": false, 00:10:43.607 "data_offset": 0, 00:10:43.607 "data_size": 63488 00:10:43.607 }, 00:10:43.607 { 00:10:43.607 "name": null, 00:10:43.607 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:43.607 "is_configured": false, 00:10:43.607 "data_offset": 0, 00:10:43.607 "data_size": 63488 00:10:43.607 }, 00:10:43.607 { 00:10:43.607 "name": "BaseBdev4", 00:10:43.607 "uuid": "6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:43.607 "is_configured": true, 00:10:43.607 "data_offset": 2048, 00:10:43.607 "data_size": 63488 00:10:43.607 } 00:10:43.607 ] 00:10:43.607 }' 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.607 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.865 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.865 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.866 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.866 
03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.866 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.125 [2024-11-20 03:17:33.518764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.125 "name": "Existed_Raid", 00:10:44.125 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:44.125 "strip_size_kb": 64, 00:10:44.125 "state": "configuring", 00:10:44.125 "raid_level": "concat", 00:10:44.125 "superblock": true, 00:10:44.125 "num_base_bdevs": 4, 00:10:44.125 "num_base_bdevs_discovered": 3, 00:10:44.125 "num_base_bdevs_operational": 4, 00:10:44.125 "base_bdevs_list": [ 00:10:44.125 { 00:10:44.125 "name": "BaseBdev1", 00:10:44.125 "uuid": "c3c3156c-253e-4e65-a854-ad4d16a31e81", 00:10:44.125 "is_configured": true, 00:10:44.125 "data_offset": 2048, 00:10:44.125 "data_size": 63488 00:10:44.125 }, 00:10:44.125 { 00:10:44.125 "name": null, 00:10:44.125 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:44.125 "is_configured": false, 00:10:44.125 "data_offset": 0, 00:10:44.125 "data_size": 63488 00:10:44.125 }, 00:10:44.125 { 00:10:44.125 "name": "BaseBdev3", 00:10:44.125 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:44.125 "is_configured": true, 00:10:44.125 "data_offset": 2048, 00:10:44.125 "data_size": 63488 00:10:44.125 }, 00:10:44.125 { 00:10:44.125 "name": "BaseBdev4", 00:10:44.125 "uuid": 
"6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:44.125 "is_configured": true, 00:10:44.125 "data_offset": 2048, 00:10:44.125 "data_size": 63488 00:10:44.125 } 00:10:44.125 ] 00:10:44.125 }' 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.125 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.384 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.384 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.384 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.384 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.384 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.384 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.384 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.384 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.384 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.643 [2024-11-20 03:17:34.017974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.643 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.643 "name": "Existed_Raid", 00:10:44.643 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:44.643 "strip_size_kb": 64, 00:10:44.643 "state": "configuring", 00:10:44.643 "raid_level": "concat", 00:10:44.643 "superblock": true, 00:10:44.643 "num_base_bdevs": 4, 00:10:44.643 "num_base_bdevs_discovered": 2, 00:10:44.643 "num_base_bdevs_operational": 4, 00:10:44.643 "base_bdevs_list": [ 00:10:44.643 { 00:10:44.643 "name": null, 00:10:44.643 
"uuid": "c3c3156c-253e-4e65-a854-ad4d16a31e81", 00:10:44.643 "is_configured": false, 00:10:44.643 "data_offset": 0, 00:10:44.643 "data_size": 63488 00:10:44.643 }, 00:10:44.643 { 00:10:44.643 "name": null, 00:10:44.643 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:44.643 "is_configured": false, 00:10:44.643 "data_offset": 0, 00:10:44.643 "data_size": 63488 00:10:44.643 }, 00:10:44.643 { 00:10:44.643 "name": "BaseBdev3", 00:10:44.643 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:44.643 "is_configured": true, 00:10:44.643 "data_offset": 2048, 00:10:44.643 "data_size": 63488 00:10:44.643 }, 00:10:44.643 { 00:10:44.643 "name": "BaseBdev4", 00:10:44.643 "uuid": "6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:44.644 "is_configured": true, 00:10:44.644 "data_offset": 2048, 00:10:44.644 "data_size": 63488 00:10:44.644 } 00:10:44.644 ] 00:10:44.644 }' 00:10:44.644 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.644 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.212 [2024-11-20 03:17:34.616383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.212 03:17:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.212 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.212 "name": "Existed_Raid", 00:10:45.212 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:45.212 "strip_size_kb": 64, 00:10:45.212 "state": "configuring", 00:10:45.212 "raid_level": "concat", 00:10:45.212 "superblock": true, 00:10:45.212 "num_base_bdevs": 4, 00:10:45.212 "num_base_bdevs_discovered": 3, 00:10:45.212 "num_base_bdevs_operational": 4, 00:10:45.212 "base_bdevs_list": [ 00:10:45.212 { 00:10:45.212 "name": null, 00:10:45.212 "uuid": "c3c3156c-253e-4e65-a854-ad4d16a31e81", 00:10:45.212 "is_configured": false, 00:10:45.212 "data_offset": 0, 00:10:45.212 "data_size": 63488 00:10:45.212 }, 00:10:45.212 { 00:10:45.212 "name": "BaseBdev2", 00:10:45.212 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:45.212 "is_configured": true, 00:10:45.212 "data_offset": 2048, 00:10:45.212 "data_size": 63488 00:10:45.212 }, 00:10:45.212 { 00:10:45.212 "name": "BaseBdev3", 00:10:45.212 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:45.212 "is_configured": true, 00:10:45.212 "data_offset": 2048, 00:10:45.212 "data_size": 63488 00:10:45.212 }, 00:10:45.212 { 00:10:45.212 "name": "BaseBdev4", 00:10:45.212 "uuid": "6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:45.212 "is_configured": true, 00:10:45.213 "data_offset": 2048, 00:10:45.213 "data_size": 63488 00:10:45.213 } 00:10:45.213 ] 00:10:45.213 }' 00:10:45.213 03:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.213 03:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.471 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.472 03:17:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.472 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.472 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.472 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c3c3156c-253e-4e65-a854-ad4d16a31e81 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.732 [2024-11-20 03:17:35.216043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:45.732 [2024-11-20 03:17:35.216282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:45.732 [2024-11-20 03:17:35.216296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:45.732 [2024-11-20 03:17:35.216559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:45.732 [2024-11-20 03:17:35.216748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:45.732 [2024-11-20 03:17:35.216763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:45.732 [2024-11-20 03:17:35.216880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.732 NewBaseBdev 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.732 03:17:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.732 [ 00:10:45.732 { 00:10:45.732 "name": "NewBaseBdev", 00:10:45.732 "aliases": [ 00:10:45.732 "c3c3156c-253e-4e65-a854-ad4d16a31e81" 00:10:45.732 ], 00:10:45.732 "product_name": "Malloc disk", 00:10:45.732 "block_size": 512, 00:10:45.732 "num_blocks": 65536, 00:10:45.732 "uuid": "c3c3156c-253e-4e65-a854-ad4d16a31e81", 00:10:45.732 "assigned_rate_limits": { 00:10:45.732 "rw_ios_per_sec": 0, 00:10:45.732 "rw_mbytes_per_sec": 0, 00:10:45.732 "r_mbytes_per_sec": 0, 00:10:45.732 "w_mbytes_per_sec": 0 00:10:45.732 }, 00:10:45.732 "claimed": true, 00:10:45.732 "claim_type": "exclusive_write", 00:10:45.732 "zoned": false, 00:10:45.732 "supported_io_types": { 00:10:45.732 "read": true, 00:10:45.732 "write": true, 00:10:45.732 "unmap": true, 00:10:45.732 "flush": true, 00:10:45.732 "reset": true, 00:10:45.732 "nvme_admin": false, 00:10:45.732 "nvme_io": false, 00:10:45.732 "nvme_io_md": false, 00:10:45.732 "write_zeroes": true, 00:10:45.732 "zcopy": true, 00:10:45.732 "get_zone_info": false, 00:10:45.732 "zone_management": false, 00:10:45.732 "zone_append": false, 00:10:45.732 "compare": false, 00:10:45.732 "compare_and_write": false, 00:10:45.732 "abort": true, 00:10:45.732 "seek_hole": false, 00:10:45.732 "seek_data": false, 00:10:45.732 "copy": true, 00:10:45.732 "nvme_iov_md": false 00:10:45.732 }, 00:10:45.732 "memory_domains": [ 00:10:45.732 { 00:10:45.732 "dma_device_id": "system", 00:10:45.732 "dma_device_type": 1 00:10:45.732 }, 00:10:45.732 { 00:10:45.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.732 "dma_device_type": 2 00:10:45.732 } 00:10:45.732 ], 00:10:45.732 "driver_specific": {} 00:10:45.732 } 00:10:45.732 ] 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.732 03:17:35 
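The `waitforbdev NewBaseBdev` call traced above follows a common polling shape: retry an RPC query until the bdev appears or a timeout expires. A minimal sketch of that shape, with the real `rpc_cmd bdev_get_bdevs` replaced by a stub (`rpc_stub`, a hypothetical stand-in that succeeds on the third attempt):

```shell
#!/usr/bin/env bash
# Sketch of a waitforbdev-style helper, assuming an rpc_cmd-like probe.
# rpc_stub simulates the bdev becoming visible on the third query.
attempts=0
rpc_stub() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

waitforbdev_sketch() {
  local i
  for ((i = 0; i < 10; i++)); do
    # In the real helper this would be: rpc_cmd bdev_get_bdevs -b "$1"
    rpc_stub && return 0
    sleep 0.1
  done
  return 1
}

waitforbdev_sketch NewBaseBdev && echo "bdev ready after $attempts attempts"
```

The actual helper in `autotest_common.sh` also honors a per-bdev timeout (the `bdev_timeout=2000` seen in the log), which the sketch approximates with a fixed retry count.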
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.732 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.733 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.733 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.733 "name": "Existed_Raid", 00:10:45.733 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:45.733 "strip_size_kb": 64, 00:10:45.733 
"state": "online", 00:10:45.733 "raid_level": "concat", 00:10:45.733 "superblock": true, 00:10:45.733 "num_base_bdevs": 4, 00:10:45.733 "num_base_bdevs_discovered": 4, 00:10:45.733 "num_base_bdevs_operational": 4, 00:10:45.733 "base_bdevs_list": [ 00:10:45.733 { 00:10:45.733 "name": "NewBaseBdev", 00:10:45.733 "uuid": "c3c3156c-253e-4e65-a854-ad4d16a31e81", 00:10:45.733 "is_configured": true, 00:10:45.733 "data_offset": 2048, 00:10:45.733 "data_size": 63488 00:10:45.733 }, 00:10:45.733 { 00:10:45.733 "name": "BaseBdev2", 00:10:45.733 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:45.733 "is_configured": true, 00:10:45.733 "data_offset": 2048, 00:10:45.733 "data_size": 63488 00:10:45.733 }, 00:10:45.733 { 00:10:45.733 "name": "BaseBdev3", 00:10:45.733 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:45.733 "is_configured": true, 00:10:45.733 "data_offset": 2048, 00:10:45.733 "data_size": 63488 00:10:45.733 }, 00:10:45.733 { 00:10:45.733 "name": "BaseBdev4", 00:10:45.733 "uuid": "6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:45.733 "is_configured": true, 00:10:45.733 "data_offset": 2048, 00:10:45.733 "data_size": 63488 00:10:45.733 } 00:10:45.733 ] 00:10:45.733 }' 00:10:45.733 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.733 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.300 
03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.300 [2024-11-20 03:17:35.723662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.300 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.300 "name": "Existed_Raid", 00:10:46.300 "aliases": [ 00:10:46.300 "f9ef7b47-8245-45aa-add7-dd7d9f55d7da" 00:10:46.300 ], 00:10:46.300 "product_name": "Raid Volume", 00:10:46.300 "block_size": 512, 00:10:46.300 "num_blocks": 253952, 00:10:46.300 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:46.300 "assigned_rate_limits": { 00:10:46.300 "rw_ios_per_sec": 0, 00:10:46.300 "rw_mbytes_per_sec": 0, 00:10:46.300 "r_mbytes_per_sec": 0, 00:10:46.300 "w_mbytes_per_sec": 0 00:10:46.300 }, 00:10:46.300 "claimed": false, 00:10:46.300 "zoned": false, 00:10:46.300 "supported_io_types": { 00:10:46.300 "read": true, 00:10:46.300 "write": true, 00:10:46.300 "unmap": true, 00:10:46.300 "flush": true, 00:10:46.300 "reset": true, 00:10:46.300 "nvme_admin": false, 00:10:46.300 "nvme_io": false, 00:10:46.300 "nvme_io_md": false, 00:10:46.300 "write_zeroes": true, 00:10:46.300 "zcopy": false, 00:10:46.300 "get_zone_info": false, 00:10:46.300 "zone_management": false, 00:10:46.300 "zone_append": false, 00:10:46.300 "compare": false, 00:10:46.300 "compare_and_write": false, 00:10:46.300 "abort": 
false, 00:10:46.301 "seek_hole": false, 00:10:46.301 "seek_data": false, 00:10:46.301 "copy": false, 00:10:46.301 "nvme_iov_md": false 00:10:46.301 }, 00:10:46.301 "memory_domains": [ 00:10:46.301 { 00:10:46.301 "dma_device_id": "system", 00:10:46.301 "dma_device_type": 1 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.301 "dma_device_type": 2 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "dma_device_id": "system", 00:10:46.301 "dma_device_type": 1 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.301 "dma_device_type": 2 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "dma_device_id": "system", 00:10:46.301 "dma_device_type": 1 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.301 "dma_device_type": 2 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "dma_device_id": "system", 00:10:46.301 "dma_device_type": 1 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.301 "dma_device_type": 2 00:10:46.301 } 00:10:46.301 ], 00:10:46.301 "driver_specific": { 00:10:46.301 "raid": { 00:10:46.301 "uuid": "f9ef7b47-8245-45aa-add7-dd7d9f55d7da", 00:10:46.301 "strip_size_kb": 64, 00:10:46.301 "state": "online", 00:10:46.301 "raid_level": "concat", 00:10:46.301 "superblock": true, 00:10:46.301 "num_base_bdevs": 4, 00:10:46.301 "num_base_bdevs_discovered": 4, 00:10:46.301 "num_base_bdevs_operational": 4, 00:10:46.301 "base_bdevs_list": [ 00:10:46.301 { 00:10:46.301 "name": "NewBaseBdev", 00:10:46.301 "uuid": "c3c3156c-253e-4e65-a854-ad4d16a31e81", 00:10:46.301 "is_configured": true, 00:10:46.301 "data_offset": 2048, 00:10:46.301 "data_size": 63488 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "name": "BaseBdev2", 00:10:46.301 "uuid": "487a0df2-a720-4086-ae7b-9c5e6ab866e9", 00:10:46.301 "is_configured": true, 00:10:46.301 "data_offset": 2048, 00:10:46.301 "data_size": 63488 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 
"name": "BaseBdev3", 00:10:46.301 "uuid": "7b39eb5c-efa1-41a5-93bf-50ab67513f89", 00:10:46.301 "is_configured": true, 00:10:46.301 "data_offset": 2048, 00:10:46.301 "data_size": 63488 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "name": "BaseBdev4", 00:10:46.301 "uuid": "6eb657df-f5ed-4384-bbb1-d504aebf0a7b", 00:10:46.301 "is_configured": true, 00:10:46.301 "data_offset": 2048, 00:10:46.301 "data_size": 63488 00:10:46.301 } 00:10:46.301 ] 00:10:46.301 } 00:10:46.301 } 00:10:46.301 }' 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:46.301 BaseBdev2 00:10:46.301 BaseBdev3 00:10:46.301 BaseBdev4' 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.301 03:17:35 
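The comparisons above match against `\5\1\2\ \ \ ` (the string `512` followed by three spaces) because the test joins `[.block_size, .md_size, .md_interleave, .dif_type]` with `join(" ")` in jq, and fields that are null render as empty strings. A self-contained sketch of why the joined string carries trailing spaces (plain shell interpolation standing in for the jq join):

```shell
#!/usr/bin/env bash
# A bdev with block_size=512 and no metadata fields joins to "512   "
# (three trailing spaces), matching the escaped \5\1\2\ \ \  pattern
# in the xtrace output. Empty variables stand in for jq's null fields.
block_size=512
md_size=""
md_interleave=""
dif_type=""
cmp_base_bdev="$block_size $md_size $md_interleave $dif_type"
[[ $cmp_base_bdev == "512   " ]] && echo "fields match"
```

This is why the log shows `cmp_base_bdev='512 '` with quoting: the trailing whitespace is significant to the comparison.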
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.301 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.561 03:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.561 [2024-11-20 03:17:36.046698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.561 [2024-11-20 03:17:36.046780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.561 [2024-11-20 03:17:36.046887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.561 [2024-11-20 03:17:36.046962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.561 [2024-11-20 03:17:36.046974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71796 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71796 ']' 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71796 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.561 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71796 00:10:46.561 killing process with pid 71796 00:10:46.562 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.562 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.562 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71796' 00:10:46.562 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71796 00:10:46.562 [2024-11-20 03:17:36.094121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.562 03:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71796 00:10:47.129 [2024-11-20 03:17:36.493487] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.067 03:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:48.067 00:10:48.067 real 0m11.708s 00:10:48.067 user 0m18.640s 00:10:48.067 sys 0m2.113s 00:10:48.067 03:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.067 03:17:37 
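The `killprocess 71796` teardown traced above first probes the pid with `kill -0` (which sends no signal, only checks deliverability) before actually terminating it. A minimal sketch of that probe-then-kill shape, using a background `sleep` as a stand-in target (the helper name mirrors the log; the body is an illustrative simplification, not the real `autotest_common.sh` implementation):

```shell
#!/usr/bin/env bash
# Probe-then-kill sketch: kill -0 verifies the pid is alive and
# signalable without delivering a signal; only then do we terminate.
sleep 30 &
pid=$!

killprocess_sketch() {
  kill -0 "$1" 2>/dev/null || return 1  # pid gone or not ours
  kill "$1"
}

killprocess_sketch "$pid" && echo "killed $pid"
```

The real helper additionally cross-checks the process name via `ps --no-headers -o comm=` (visible in the log) so a recycled pid belonging to another program is never signaled.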
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.067 ************************************ 00:10:48.067 END TEST raid_state_function_test_sb 00:10:48.067 ************************************ 00:10:48.067 03:17:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:48.067 03:17:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:48.067 03:17:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.067 03:17:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.067 ************************************ 00:10:48.067 START TEST raid_superblock_test 00:10:48.067 ************************************ 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72463 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72463 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72463 ']' 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.067 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.326 [2024-11-20 03:17:37.772499] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:10:48.326 [2024-11-20 03:17:37.772709] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72463 ] 00:10:48.326 [2024-11-20 03:17:37.949043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.593 [2024-11-20 03:17:38.063923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.864 [2024-11-20 03:17:38.268721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.864 [2024-11-20 03:17:38.268781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:49.123 
03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.123 malloc1 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.123 [2024-11-20 03:17:38.680816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:49.123 [2024-11-20 03:17:38.680941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.123 [2024-11-20 03:17:38.681007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:49.123 [2024-11-20 03:17:38.681043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.123 [2024-11-20 03:17:38.683255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.123 [2024-11-20 03:17:38.683329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:49.123 pt1 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.123 malloc2 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.123 [2024-11-20 03:17:38.741549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.123 [2024-11-20 03:17:38.741697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.123 [2024-11-20 03:17:38.741746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:49.123 [2024-11-20 03:17:38.741783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.123 [2024-11-20 03:17:38.744098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.123 [2024-11-20 03:17:38.744170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.123 
pt2 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.123 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.382 malloc3 00:10:49.382 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.382 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:49.382 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.383 [2024-11-20 03:17:38.805471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:49.383 [2024-11-20 03:17:38.805597] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.383 [2024-11-20 03:17:38.805649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:49.383 [2024-11-20 03:17:38.805696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.383 [2024-11-20 03:17:38.808019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.383 [2024-11-20 03:17:38.808104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:49.383 pt3 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.383 malloc4 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.383 [2024-11-20 03:17:38.869137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:49.383 [2024-11-20 03:17:38.869197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.383 [2024-11-20 03:17:38.869221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:49.383 [2024-11-20 03:17:38.869229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.383 [2024-11-20 03:17:38.871596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.383 [2024-11-20 03:17:38.871652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:49.383 pt4 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.383 [2024-11-20 03:17:38.881156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:49.383 [2024-11-20 
03:17:38.883123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.383 [2024-11-20 03:17:38.883264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.383 [2024-11-20 03:17:38.883340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:49.383 [2024-11-20 03:17:38.883545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:49.383 [2024-11-20 03:17:38.883558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:49.383 [2024-11-20 03:17:38.883875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:49.383 [2024-11-20 03:17:38.884071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:49.383 [2024-11-20 03:17:38.884086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:49.383 [2024-11-20 03:17:38.884267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.383 "name": "raid_bdev1", 00:10:49.383 "uuid": "ca81d8f1-7d9d-498d-bce0-598d6adda5e2", 00:10:49.383 "strip_size_kb": 64, 00:10:49.383 "state": "online", 00:10:49.383 "raid_level": "concat", 00:10:49.383 "superblock": true, 00:10:49.383 "num_base_bdevs": 4, 00:10:49.383 "num_base_bdevs_discovered": 4, 00:10:49.383 "num_base_bdevs_operational": 4, 00:10:49.383 "base_bdevs_list": [ 00:10:49.383 { 00:10:49.383 "name": "pt1", 00:10:49.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.383 "is_configured": true, 00:10:49.383 "data_offset": 2048, 00:10:49.383 "data_size": 63488 00:10:49.383 }, 00:10:49.383 { 00:10:49.383 "name": "pt2", 00:10:49.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.383 "is_configured": true, 00:10:49.383 "data_offset": 2048, 00:10:49.383 "data_size": 63488 00:10:49.383 }, 00:10:49.383 { 00:10:49.383 "name": "pt3", 00:10:49.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.383 "is_configured": true, 00:10:49.383 "data_offset": 2048, 00:10:49.383 
"data_size": 63488 00:10:49.383 }, 00:10:49.383 { 00:10:49.383 "name": "pt4", 00:10:49.383 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.383 "is_configured": true, 00:10:49.383 "data_offset": 2048, 00:10:49.383 "data_size": 63488 00:10:49.383 } 00:10:49.383 ] 00:10:49.383 }' 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.383 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.951 [2024-11-20 03:17:39.320764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.951 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.951 "name": "raid_bdev1", 00:10:49.951 "aliases": [ 00:10:49.951 "ca81d8f1-7d9d-498d-bce0-598d6adda5e2" 
00:10:49.951 ], 00:10:49.951 "product_name": "Raid Volume", 00:10:49.951 "block_size": 512, 00:10:49.951 "num_blocks": 253952, 00:10:49.951 "uuid": "ca81d8f1-7d9d-498d-bce0-598d6adda5e2", 00:10:49.951 "assigned_rate_limits": { 00:10:49.951 "rw_ios_per_sec": 0, 00:10:49.951 "rw_mbytes_per_sec": 0, 00:10:49.951 "r_mbytes_per_sec": 0, 00:10:49.951 "w_mbytes_per_sec": 0 00:10:49.951 }, 00:10:49.951 "claimed": false, 00:10:49.951 "zoned": false, 00:10:49.951 "supported_io_types": { 00:10:49.951 "read": true, 00:10:49.951 "write": true, 00:10:49.951 "unmap": true, 00:10:49.951 "flush": true, 00:10:49.951 "reset": true, 00:10:49.951 "nvme_admin": false, 00:10:49.951 "nvme_io": false, 00:10:49.951 "nvme_io_md": false, 00:10:49.951 "write_zeroes": true, 00:10:49.951 "zcopy": false, 00:10:49.951 "get_zone_info": false, 00:10:49.951 "zone_management": false, 00:10:49.951 "zone_append": false, 00:10:49.951 "compare": false, 00:10:49.951 "compare_and_write": false, 00:10:49.951 "abort": false, 00:10:49.951 "seek_hole": false, 00:10:49.951 "seek_data": false, 00:10:49.951 "copy": false, 00:10:49.951 "nvme_iov_md": false 00:10:49.951 }, 00:10:49.951 "memory_domains": [ 00:10:49.951 { 00:10:49.951 "dma_device_id": "system", 00:10:49.951 "dma_device_type": 1 00:10:49.951 }, 00:10:49.951 { 00:10:49.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.952 "dma_device_type": 2 00:10:49.952 }, 00:10:49.952 { 00:10:49.952 "dma_device_id": "system", 00:10:49.952 "dma_device_type": 1 00:10:49.952 }, 00:10:49.952 { 00:10:49.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.952 "dma_device_type": 2 00:10:49.952 }, 00:10:49.952 { 00:10:49.952 "dma_device_id": "system", 00:10:49.952 "dma_device_type": 1 00:10:49.952 }, 00:10:49.952 { 00:10:49.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.952 "dma_device_type": 2 00:10:49.952 }, 00:10:49.952 { 00:10:49.952 "dma_device_id": "system", 00:10:49.952 "dma_device_type": 1 00:10:49.952 }, 00:10:49.952 { 00:10:49.952 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:49.952 "dma_device_type": 2 00:10:49.952 } 00:10:49.952 ], 00:10:49.952 "driver_specific": { 00:10:49.952 "raid": { 00:10:49.952 "uuid": "ca81d8f1-7d9d-498d-bce0-598d6adda5e2", 00:10:49.952 "strip_size_kb": 64, 00:10:49.952 "state": "online", 00:10:49.952 "raid_level": "concat", 00:10:49.952 "superblock": true, 00:10:49.952 "num_base_bdevs": 4, 00:10:49.952 "num_base_bdevs_discovered": 4, 00:10:49.952 "num_base_bdevs_operational": 4, 00:10:49.952 "base_bdevs_list": [ 00:10:49.952 { 00:10:49.952 "name": "pt1", 00:10:49.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.952 "is_configured": true, 00:10:49.952 "data_offset": 2048, 00:10:49.952 "data_size": 63488 00:10:49.952 }, 00:10:49.952 { 00:10:49.952 "name": "pt2", 00:10:49.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.952 "is_configured": true, 00:10:49.952 "data_offset": 2048, 00:10:49.952 "data_size": 63488 00:10:49.952 }, 00:10:49.952 { 00:10:49.952 "name": "pt3", 00:10:49.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.952 "is_configured": true, 00:10:49.952 "data_offset": 2048, 00:10:49.952 "data_size": 63488 00:10:49.952 }, 00:10:49.952 { 00:10:49.952 "name": "pt4", 00:10:49.952 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.952 "is_configured": true, 00:10:49.952 "data_offset": 2048, 00:10:49.952 "data_size": 63488 00:10:49.952 } 00:10:49.952 ] 00:10:49.952 } 00:10:49.952 } 00:10:49.952 }' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:49.952 pt2 00:10:49.952 pt3 00:10:49.952 pt4' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.952 03:17:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.952 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.211 [2024-11-20 03:17:39.672151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ca81d8f1-7d9d-498d-bce0-598d6adda5e2 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ca81d8f1-7d9d-498d-bce0-598d6adda5e2 ']' 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.211 [2024-11-20 03:17:39.719750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.211 [2024-11-20 03:17:39.719830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.211 [2024-11-20 03:17:39.719932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.211 [2024-11-20 03:17:39.720023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.211 [2024-11-20 03:17:39.720038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.211 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.212 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.471 03:17:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.471 [2024-11-20 03:17:39.883488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:50.471 [2024-11-20 03:17:39.885516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:50.471 [2024-11-20 03:17:39.885637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:50.471 [2024-11-20 03:17:39.885702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:50.471 [2024-11-20 03:17:39.885795] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:50.471 [2024-11-20 03:17:39.885902] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:50.471 [2024-11-20 03:17:39.885963] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:50.471 [2024-11-20 03:17:39.886044] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:50.471 [2024-11-20 03:17:39.886120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.471 [2024-11-20 03:17:39.886163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:50.471 request: 00:10:50.471 { 00:10:50.471 "name": "raid_bdev1", 00:10:50.471 "raid_level": "concat", 00:10:50.471 "base_bdevs": [ 00:10:50.471 "malloc1", 00:10:50.471 "malloc2", 00:10:50.471 "malloc3", 00:10:50.471 "malloc4" 00:10:50.471 ], 00:10:50.471 "strip_size_kb": 64, 00:10:50.471 "superblock": false, 00:10:50.471 "method": "bdev_raid_create", 00:10:50.471 "req_id": 1 00:10:50.471 } 00:10:50.471 Got JSON-RPC error response 00:10:50.471 response: 00:10:50.471 { 00:10:50.471 "code": -17, 00:10:50.471 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:50.471 } 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.471 [2024-11-20 03:17:39.959351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.471 [2024-11-20 03:17:39.959479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.471 [2024-11-20 03:17:39.959521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:50.471 [2024-11-20 03:17:39.959557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.471 [2024-11-20 03:17:39.962021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.471 [2024-11-20 03:17:39.962130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.471 [2024-11-20 03:17:39.962258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:50.471 [2024-11-20 03:17:39.962380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:50.471 pt1 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.471 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.471 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.471 "name": "raid_bdev1", 00:10:50.471 "uuid": "ca81d8f1-7d9d-498d-bce0-598d6adda5e2", 00:10:50.471 "strip_size_kb": 64, 00:10:50.471 "state": "configuring", 00:10:50.471 "raid_level": "concat", 00:10:50.471 "superblock": true, 00:10:50.471 "num_base_bdevs": 4, 00:10:50.471 "num_base_bdevs_discovered": 1, 00:10:50.471 "num_base_bdevs_operational": 4, 00:10:50.471 "base_bdevs_list": [ 00:10:50.471 { 00:10:50.471 "name": "pt1", 00:10:50.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.471 "is_configured": true, 00:10:50.471 "data_offset": 2048, 00:10:50.471 "data_size": 63488 00:10:50.471 }, 00:10:50.471 { 00:10:50.471 "name": null, 00:10:50.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.471 "is_configured": false, 00:10:50.471 "data_offset": 2048, 00:10:50.471 "data_size": 63488 00:10:50.471 }, 00:10:50.471 { 00:10:50.471 "name": null, 00:10:50.471 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.471 "is_configured": false, 00:10:50.471 "data_offset": 2048, 00:10:50.471 "data_size": 63488 00:10:50.471 }, 00:10:50.471 { 00:10:50.471 "name": null, 00:10:50.471 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.471 "is_configured": false, 00:10:50.471 "data_offset": 2048, 00:10:50.471 "data_size": 63488 00:10:50.471 } 00:10:50.471 ] 00:10:50.471 }' 00:10:50.471 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.471 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.039 [2024-11-20 03:17:40.414583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.039 [2024-11-20 03:17:40.414672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.039 [2024-11-20 03:17:40.414692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:51.039 [2024-11-20 03:17:40.414704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.039 [2024-11-20 03:17:40.415146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.039 [2024-11-20 03:17:40.415168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.039 [2024-11-20 03:17:40.415246] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:51.039 [2024-11-20 03:17:40.415270] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.039 pt2 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.039 [2024-11-20 03:17:40.426599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.039 03:17:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.039 "name": "raid_bdev1", 00:10:51.039 "uuid": "ca81d8f1-7d9d-498d-bce0-598d6adda5e2", 00:10:51.039 "strip_size_kb": 64, 00:10:51.039 "state": "configuring", 00:10:51.039 "raid_level": "concat", 00:10:51.039 "superblock": true, 00:10:51.039 "num_base_bdevs": 4, 00:10:51.039 "num_base_bdevs_discovered": 1, 00:10:51.039 "num_base_bdevs_operational": 4, 00:10:51.039 "base_bdevs_list": [ 00:10:51.039 { 00:10:51.039 "name": "pt1", 00:10:51.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.039 "is_configured": true, 00:10:51.039 "data_offset": 2048, 00:10:51.039 "data_size": 63488 00:10:51.039 }, 00:10:51.039 { 00:10:51.039 "name": null, 00:10:51.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.039 "is_configured": false, 00:10:51.039 "data_offset": 0, 00:10:51.039 "data_size": 63488 00:10:51.039 }, 00:10:51.039 { 00:10:51.039 "name": null, 00:10:51.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.039 "is_configured": false, 00:10:51.039 "data_offset": 2048, 00:10:51.039 "data_size": 63488 00:10:51.039 }, 00:10:51.039 { 00:10:51.039 "name": null, 00:10:51.039 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.039 "is_configured": false, 00:10:51.039 "data_offset": 2048, 00:10:51.039 "data_size": 63488 00:10:51.039 } 00:10:51.039 ] 00:10:51.039 }' 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.039 03:17:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.298 [2024-11-20 03:17:40.861888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.298 [2024-11-20 03:17:40.862023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.298 [2024-11-20 03:17:40.862068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:51.298 [2024-11-20 03:17:40.862101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.298 [2024-11-20 03:17:40.862631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.298 [2024-11-20 03:17:40.862701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.298 [2024-11-20 03:17:40.862821] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:51.298 [2024-11-20 03:17:40.862878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.298 pt2 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.298 [2024-11-20 03:17:40.873814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:51.298 [2024-11-20 03:17:40.873925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.298 [2024-11-20 03:17:40.873972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:51.298 [2024-11-20 03:17:40.874013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.298 [2024-11-20 03:17:40.874469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.298 [2024-11-20 03:17:40.874534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:51.298 [2024-11-20 03:17:40.874653] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:51.298 [2024-11-20 03:17:40.874706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:51.298 pt3 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.298 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.298 [2024-11-20 03:17:40.885775] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:51.298 [2024-11-20 03:17:40.885832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.298 [2024-11-20 03:17:40.885855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:51.298 [2024-11-20 03:17:40.885864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.298 [2024-11-20 03:17:40.886298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.298 [2024-11-20 03:17:40.886315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:51.298 [2024-11-20 03:17:40.886394] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:51.298 [2024-11-20 03:17:40.886414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:51.298 [2024-11-20 03:17:40.886575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:51.298 [2024-11-20 03:17:40.886585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:51.299 [2024-11-20 03:17:40.886864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:51.299 [2024-11-20 03:17:40.887043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:51.299 [2024-11-20 03:17:40.887057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:51.299 [2024-11-20 03:17:40.887208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.299 pt4 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.299 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.558 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.558 "name": "raid_bdev1", 00:10:51.558 "uuid": "ca81d8f1-7d9d-498d-bce0-598d6adda5e2", 00:10:51.558 "strip_size_kb": 64, 00:10:51.558 "state": "online", 00:10:51.558 "raid_level": "concat", 00:10:51.558 
"superblock": true, 00:10:51.558 "num_base_bdevs": 4, 00:10:51.558 "num_base_bdevs_discovered": 4, 00:10:51.558 "num_base_bdevs_operational": 4, 00:10:51.558 "base_bdevs_list": [ 00:10:51.558 { 00:10:51.558 "name": "pt1", 00:10:51.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.558 "is_configured": true, 00:10:51.558 "data_offset": 2048, 00:10:51.558 "data_size": 63488 00:10:51.558 }, 00:10:51.558 { 00:10:51.558 "name": "pt2", 00:10:51.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.558 "is_configured": true, 00:10:51.558 "data_offset": 2048, 00:10:51.558 "data_size": 63488 00:10:51.558 }, 00:10:51.558 { 00:10:51.558 "name": "pt3", 00:10:51.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.558 "is_configured": true, 00:10:51.558 "data_offset": 2048, 00:10:51.558 "data_size": 63488 00:10:51.558 }, 00:10:51.558 { 00:10:51.558 "name": "pt4", 00:10:51.558 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.558 "is_configured": true, 00:10:51.558 "data_offset": 2048, 00:10:51.558 "data_size": 63488 00:10:51.558 } 00:10:51.558 ] 00:10:51.558 }' 00:10:51.558 03:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.558 03:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.817 03:17:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.817 [2024-11-20 03:17:41.377282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.817 "name": "raid_bdev1", 00:10:51.817 "aliases": [ 00:10:51.817 "ca81d8f1-7d9d-498d-bce0-598d6adda5e2" 00:10:51.817 ], 00:10:51.817 "product_name": "Raid Volume", 00:10:51.817 "block_size": 512, 00:10:51.817 "num_blocks": 253952, 00:10:51.817 "uuid": "ca81d8f1-7d9d-498d-bce0-598d6adda5e2", 00:10:51.817 "assigned_rate_limits": { 00:10:51.817 "rw_ios_per_sec": 0, 00:10:51.817 "rw_mbytes_per_sec": 0, 00:10:51.817 "r_mbytes_per_sec": 0, 00:10:51.817 "w_mbytes_per_sec": 0 00:10:51.817 }, 00:10:51.817 "claimed": false, 00:10:51.817 "zoned": false, 00:10:51.817 "supported_io_types": { 00:10:51.817 "read": true, 00:10:51.817 "write": true, 00:10:51.817 "unmap": true, 00:10:51.817 "flush": true, 00:10:51.817 "reset": true, 00:10:51.817 "nvme_admin": false, 00:10:51.817 "nvme_io": false, 00:10:51.817 "nvme_io_md": false, 00:10:51.817 "write_zeroes": true, 00:10:51.817 "zcopy": false, 00:10:51.817 "get_zone_info": false, 00:10:51.817 "zone_management": false, 00:10:51.817 "zone_append": false, 00:10:51.817 "compare": false, 00:10:51.817 "compare_and_write": false, 00:10:51.817 "abort": false, 00:10:51.817 "seek_hole": false, 00:10:51.817 "seek_data": false, 00:10:51.817 "copy": false, 00:10:51.817 "nvme_iov_md": false 00:10:51.817 }, 00:10:51.817 
"memory_domains": [ 00:10:51.817 { 00:10:51.817 "dma_device_id": "system", 00:10:51.817 "dma_device_type": 1 00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.817 "dma_device_type": 2 00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "dma_device_id": "system", 00:10:51.817 "dma_device_type": 1 00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.817 "dma_device_type": 2 00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "dma_device_id": "system", 00:10:51.817 "dma_device_type": 1 00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.817 "dma_device_type": 2 00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "dma_device_id": "system", 00:10:51.817 "dma_device_type": 1 00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.817 "dma_device_type": 2 00:10:51.817 } 00:10:51.817 ], 00:10:51.817 "driver_specific": { 00:10:51.817 "raid": { 00:10:51.817 "uuid": "ca81d8f1-7d9d-498d-bce0-598d6adda5e2", 00:10:51.817 "strip_size_kb": 64, 00:10:51.817 "state": "online", 00:10:51.817 "raid_level": "concat", 00:10:51.817 "superblock": true, 00:10:51.817 "num_base_bdevs": 4, 00:10:51.817 "num_base_bdevs_discovered": 4, 00:10:51.817 "num_base_bdevs_operational": 4, 00:10:51.817 "base_bdevs_list": [ 00:10:51.817 { 00:10:51.817 "name": "pt1", 00:10:51.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.817 "is_configured": true, 00:10:51.817 "data_offset": 2048, 00:10:51.817 "data_size": 63488 00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "name": "pt2", 00:10:51.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.817 "is_configured": true, 00:10:51.817 "data_offset": 2048, 00:10:51.817 "data_size": 63488 00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "name": "pt3", 00:10:51.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.817 "is_configured": true, 00:10:51.817 "data_offset": 2048, 00:10:51.817 "data_size": 63488 
00:10:51.817 }, 00:10:51.817 { 00:10:51.817 "name": "pt4", 00:10:51.817 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.817 "is_configured": true, 00:10:51.817 "data_offset": 2048, 00:10:51.817 "data_size": 63488 00:10:51.817 } 00:10:51.817 ] 00:10:51.817 } 00:10:51.817 } 00:10:51.817 }' 00:10:51.817 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:52.077 pt2 00:10:52.077 pt3 00:10:52.077 pt4' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.077 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.337 [2024-11-20 03:17:41.708728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ca81d8f1-7d9d-498d-bce0-598d6adda5e2 '!=' ca81d8f1-7d9d-498d-bce0-598d6adda5e2 ']' 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72463 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72463 ']' 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72463 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72463 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72463' 00:10:52.337 killing process with pid 72463 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72463 00:10:52.337 [2024-11-20 03:17:41.796320] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.337 [2024-11-20 03:17:41.796473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.337 03:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72463 00:10:52.337 [2024-11-20 03:17:41.796576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.337 [2024-11-20 03:17:41.796587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:52.597 [2024-11-20 03:17:42.205395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.978 03:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:53.978 00:10:53.978 real 0m5.640s 00:10:53.978 user 0m8.125s 00:10:53.978 sys 0m0.935s 00:10:53.978 03:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.978 03:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.978 ************************************ 00:10:53.978 END TEST raid_superblock_test 
00:10:53.978 ************************************ 00:10:53.978 03:17:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:53.978 03:17:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:53.978 03:17:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.978 03:17:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.978 ************************************ 00:10:53.978 START TEST raid_read_error_test 00:10:53.978 ************************************ 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.978 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VzyjCykz1V 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72734 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72734 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72734 ']' 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.979 03:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.979 [2024-11-20 03:17:43.498593] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:10:53.979 [2024-11-20 03:17:43.498738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72734 ] 00:10:54.238 [2024-11-20 03:17:43.662215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.238 [2024-11-20 03:17:43.775497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.498 [2024-11-20 03:17:43.979210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.498 [2024-11-20 03:17:43.979242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.757 BaseBdev1_malloc 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.757 true 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.757 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.017 [2024-11-20 03:17:44.391794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:55.017 [2024-11-20 03:17:44.391854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.017 [2024-11-20 03:17:44.391876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:55.017 [2024-11-20 03:17:44.391889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.017 [2024-11-20 03:17:44.394164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.017 [2024-11-20 03:17:44.394206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.017 BaseBdev1 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.017 BaseBdev2_malloc 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.017 true 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.017 [2024-11-20 03:17:44.460059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.017 [2024-11-20 03:17:44.460113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.017 [2024-11-20 03:17:44.460129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:55.017 [2024-11-20 03:17:44.460139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.017 [2024-11-20 03:17:44.462172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.017 [2024-11-20 03:17:44.462277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.017 BaseBdev2 00:10:55.017 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.018 BaseBdev3_malloc 00:10:55.018 03:17:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.018 true 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.018 [2024-11-20 03:17:44.538595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.018 [2024-11-20 03:17:44.538663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.018 [2024-11-20 03:17:44.538683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:55.018 [2024-11-20 03:17:44.538694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.018 [2024-11-20 03:17:44.540879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.018 [2024-11-20 03:17:44.540953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:55.018 BaseBdev3 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.018 BaseBdev4_malloc 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.018 true 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.018 [2024-11-20 03:17:44.603978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:55.018 [2024-11-20 03:17:44.604093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.018 [2024-11-20 03:17:44.604119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:55.018 [2024-11-20 03:17:44.604131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.018 [2024-11-20 03:17:44.606410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.018 [2024-11-20 03:17:44.606463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:55.018 BaseBdev4 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.018 [2024-11-20 03:17:44.616019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.018 [2024-11-20 03:17:44.617817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.018 [2024-11-20 03:17:44.617891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.018 [2024-11-20 03:17:44.617956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.018 [2024-11-20 03:17:44.618172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:55.018 [2024-11-20 03:17:44.618186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.018 [2024-11-20 03:17:44.618468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:55.018 [2024-11-20 03:17:44.618642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:55.018 [2024-11-20 03:17:44.618655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:55.018 [2024-11-20 03:17:44.618817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:55.018 03:17:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.018 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.277 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.277 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.277 "name": "raid_bdev1", 00:10:55.277 "uuid": "f4cfa146-459e-4eec-ba8b-ee91c41d7895", 00:10:55.277 "strip_size_kb": 64, 00:10:55.277 "state": "online", 00:10:55.277 "raid_level": "concat", 00:10:55.277 "superblock": true, 00:10:55.277 "num_base_bdevs": 4, 00:10:55.277 "num_base_bdevs_discovered": 4, 00:10:55.277 "num_base_bdevs_operational": 4, 00:10:55.277 "base_bdevs_list": [ 
00:10:55.277 { 00:10:55.277 "name": "BaseBdev1", 00:10:55.277 "uuid": "3a8a4700-9ed0-5e40-b5b0-db8725609a2b", 00:10:55.277 "is_configured": true, 00:10:55.277 "data_offset": 2048, 00:10:55.277 "data_size": 63488 00:10:55.277 }, 00:10:55.277 { 00:10:55.277 "name": "BaseBdev2", 00:10:55.277 "uuid": "5e5c9181-a71b-5e6c-9b9c-f08a67f98117", 00:10:55.277 "is_configured": true, 00:10:55.277 "data_offset": 2048, 00:10:55.277 "data_size": 63488 00:10:55.277 }, 00:10:55.277 { 00:10:55.277 "name": "BaseBdev3", 00:10:55.277 "uuid": "fb7b7e3c-393d-54bf-a2e6-71ee703a09af", 00:10:55.277 "is_configured": true, 00:10:55.277 "data_offset": 2048, 00:10:55.277 "data_size": 63488 00:10:55.277 }, 00:10:55.277 { 00:10:55.277 "name": "BaseBdev4", 00:10:55.277 "uuid": "6be7103e-0cd7-545b-bf50-65b42990b827", 00:10:55.277 "is_configured": true, 00:10:55.277 "data_offset": 2048, 00:10:55.277 "data_size": 63488 00:10:55.277 } 00:10:55.277 ] 00:10:55.277 }' 00:10:55.277 03:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.277 03:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.537 03:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:55.537 03:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:55.796 [2024-11-20 03:17:45.184336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.733 03:17:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.733 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.733 03:17:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.734 "name": "raid_bdev1", 00:10:56.734 "uuid": "f4cfa146-459e-4eec-ba8b-ee91c41d7895", 00:10:56.734 "strip_size_kb": 64, 00:10:56.734 "state": "online", 00:10:56.734 "raid_level": "concat", 00:10:56.734 "superblock": true, 00:10:56.734 "num_base_bdevs": 4, 00:10:56.734 "num_base_bdevs_discovered": 4, 00:10:56.734 "num_base_bdevs_operational": 4, 00:10:56.734 "base_bdevs_list": [ 00:10:56.734 { 00:10:56.734 "name": "BaseBdev1", 00:10:56.734 "uuid": "3a8a4700-9ed0-5e40-b5b0-db8725609a2b", 00:10:56.734 "is_configured": true, 00:10:56.734 "data_offset": 2048, 00:10:56.734 "data_size": 63488 00:10:56.734 }, 00:10:56.734 { 00:10:56.734 "name": "BaseBdev2", 00:10:56.734 "uuid": "5e5c9181-a71b-5e6c-9b9c-f08a67f98117", 00:10:56.734 "is_configured": true, 00:10:56.734 "data_offset": 2048, 00:10:56.734 "data_size": 63488 00:10:56.734 }, 00:10:56.734 { 00:10:56.734 "name": "BaseBdev3", 00:10:56.734 "uuid": "fb7b7e3c-393d-54bf-a2e6-71ee703a09af", 00:10:56.734 "is_configured": true, 00:10:56.734 "data_offset": 2048, 00:10:56.734 "data_size": 63488 00:10:56.734 }, 00:10:56.734 { 00:10:56.734 "name": "BaseBdev4", 00:10:56.734 "uuid": "6be7103e-0cd7-545b-bf50-65b42990b827", 00:10:56.734 "is_configured": true, 00:10:56.734 "data_offset": 2048, 00:10:56.734 "data_size": 63488 00:10:56.734 } 00:10:56.734 ] 00:10:56.734 }' 00:10:56.734 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.734 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.993 [2024-11-20 03:17:46.572544] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.993 [2024-11-20 03:17:46.572589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.993 [2024-11-20 03:17:46.575358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.993 [2024-11-20 03:17:46.575427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.993 [2024-11-20 03:17:46.575476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.993 [2024-11-20 03:17:46.575493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:56.993 { 00:10:56.993 "results": [ 00:10:56.993 { 00:10:56.993 "job": "raid_bdev1", 00:10:56.993 "core_mask": "0x1", 00:10:56.993 "workload": "randrw", 00:10:56.993 "percentage": 50, 00:10:56.993 "status": "finished", 00:10:56.993 "queue_depth": 1, 00:10:56.993 "io_size": 131072, 00:10:56.993 "runtime": 1.388913, 00:10:56.993 "iops": 15553.169996968853, 00:10:56.993 "mibps": 1944.1462496211066, 00:10:56.993 "io_failed": 1, 00:10:56.993 "io_timeout": 0, 00:10:56.993 "avg_latency_us": 89.38560182992536, 00:10:56.993 "min_latency_us": 26.270742358078603, 00:10:56.993 "max_latency_us": 1502.46288209607 00:10:56.993 } 00:10:56.993 ], 00:10:56.993 "core_count": 1 00:10:56.993 } 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72734 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72734 ']' 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72734 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72734 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72734' 00:10:56.993 killing process with pid 72734 00:10:56.993 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72734 00:10:56.994 [2024-11-20 03:17:46.608512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.994 03:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72734 00:10:57.563 [2024-11-20 03:17:46.949827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VzyjCykz1V 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:58.533 ************************************ 00:10:58.533 END TEST raid_read_error_test 00:10:58.533 ************************************ 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:58.533 00:10:58.533 real 0m4.774s 
00:10:58.533 user 0m5.676s 00:10:58.533 sys 0m0.560s 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.533 03:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.792 03:17:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:58.792 03:17:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:58.792 03:17:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.792 03:17:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.792 ************************************ 00:10:58.792 START TEST raid_write_error_test 00:10:58.792 ************************************ 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0sP1KX30JU 00:10:58.792 03:17:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72882 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72882 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72882 ']' 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.792 03:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.792 [2024-11-20 03:17:48.341531] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:10:58.792 [2024-11-20 03:17:48.341664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72882 ] 00:10:59.052 [2024-11-20 03:17:48.495833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.052 [2024-11-20 03:17:48.609071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.311 [2024-11-20 03:17:48.830051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.311 [2024-11-20 03:17:48.830089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.570 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.570 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:59.570 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.570 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:59.570 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.570 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 BaseBdev1_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 true 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 [2024-11-20 03:17:49.240842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:59.829 [2024-11-20 03:17:49.240899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.829 [2024-11-20 03:17:49.240918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:59.829 [2024-11-20 03:17:49.240929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.829 [2024-11-20 03:17:49.243114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.829 [2024-11-20 03:17:49.243158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:59.829 BaseBdev1 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 BaseBdev2_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:59.829 03:17:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 true 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 [2024-11-20 03:17:49.297539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:59.829 [2024-11-20 03:17:49.297593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.829 [2024-11-20 03:17:49.297624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:59.829 [2024-11-20 03:17:49.297636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.829 [2024-11-20 03:17:49.299756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.829 [2024-11-20 03:17:49.299793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:59.829 BaseBdev2 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:59.829 BaseBdev3_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 true 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 [2024-11-20 03:17:49.367269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:59.829 [2024-11-20 03:17:49.367321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.829 [2024-11-20 03:17:49.367354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:59.829 [2024-11-20 03:17:49.367366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.829 [2024-11-20 03:17:49.369503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.829 [2024-11-20 03:17:49.369545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:59.829 BaseBdev3 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 BaseBdev4_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 true 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 [2024-11-20 03:17:49.422881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:59.829 [2024-11-20 03:17:49.422945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.829 [2024-11-20 03:17:49.422980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:59.829 [2024-11-20 03:17:49.422991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.829 [2024-11-20 03:17:49.425199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.829 [2024-11-20 03:17:49.425296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:59.829 BaseBdev4 
00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.829 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.829 [2024-11-20 03:17:49.430923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.829 [2024-11-20 03:17:49.432725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.829 [2024-11-20 03:17:49.432799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.829 [2024-11-20 03:17:49.432865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.829 [2024-11-20 03:17:49.433086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:59.829 [2024-11-20 03:17:49.433100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.829 [2024-11-20 03:17:49.433330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:59.829 [2024-11-20 03:17:49.433488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:59.829 [2024-11-20 03:17:49.433499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:59.830 [2024-11-20 03:17:49.433661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.830 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.089 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.089 "name": "raid_bdev1", 00:11:00.089 "uuid": "a330d977-82bd-4dec-9ecf-c03709da04e3", 00:11:00.089 "strip_size_kb": 64, 00:11:00.089 "state": "online", 00:11:00.089 "raid_level": "concat", 00:11:00.089 "superblock": true, 00:11:00.089 "num_base_bdevs": 4, 00:11:00.089 "num_base_bdevs_discovered": 4, 00:11:00.089 
"num_base_bdevs_operational": 4, 00:11:00.089 "base_bdevs_list": [ 00:11:00.089 { 00:11:00.089 "name": "BaseBdev1", 00:11:00.089 "uuid": "67b8ccd5-17af-571b-9e10-d6c19fe57434", 00:11:00.089 "is_configured": true, 00:11:00.089 "data_offset": 2048, 00:11:00.089 "data_size": 63488 00:11:00.089 }, 00:11:00.089 { 00:11:00.089 "name": "BaseBdev2", 00:11:00.089 "uuid": "e2edf552-ec64-5a88-b506-fa93b7f67c19", 00:11:00.089 "is_configured": true, 00:11:00.089 "data_offset": 2048, 00:11:00.089 "data_size": 63488 00:11:00.089 }, 00:11:00.089 { 00:11:00.089 "name": "BaseBdev3", 00:11:00.089 "uuid": "c47a01dd-61f4-5be1-9c65-43261d4d2f39", 00:11:00.089 "is_configured": true, 00:11:00.089 "data_offset": 2048, 00:11:00.089 "data_size": 63488 00:11:00.089 }, 00:11:00.089 { 00:11:00.089 "name": "BaseBdev4", 00:11:00.089 "uuid": "93d8f303-cf10-5f4c-bd48-4d0b3b780180", 00:11:00.089 "is_configured": true, 00:11:00.089 "data_offset": 2048, 00:11:00.089 "data_size": 63488 00:11:00.089 } 00:11:00.089 ] 00:11:00.089 }' 00:11:00.089 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.089 03:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.347 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:00.347 03:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:00.607 [2024-11-20 03:17:49.991326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.545 03:17:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.545 "name": "raid_bdev1", 00:11:01.545 "uuid": "a330d977-82bd-4dec-9ecf-c03709da04e3", 00:11:01.545 "strip_size_kb": 64, 00:11:01.545 "state": "online", 00:11:01.545 "raid_level": "concat", 00:11:01.545 "superblock": true, 00:11:01.545 "num_base_bdevs": 4, 00:11:01.545 "num_base_bdevs_discovered": 4, 00:11:01.545 "num_base_bdevs_operational": 4, 00:11:01.545 "base_bdevs_list": [ 00:11:01.545 { 00:11:01.545 "name": "BaseBdev1", 00:11:01.545 "uuid": "67b8ccd5-17af-571b-9e10-d6c19fe57434", 00:11:01.545 "is_configured": true, 00:11:01.545 "data_offset": 2048, 00:11:01.545 "data_size": 63488 00:11:01.545 }, 00:11:01.545 { 00:11:01.545 "name": "BaseBdev2", 00:11:01.545 "uuid": "e2edf552-ec64-5a88-b506-fa93b7f67c19", 00:11:01.545 "is_configured": true, 00:11:01.545 "data_offset": 2048, 00:11:01.545 "data_size": 63488 00:11:01.545 }, 00:11:01.545 { 00:11:01.545 "name": "BaseBdev3", 00:11:01.545 "uuid": "c47a01dd-61f4-5be1-9c65-43261d4d2f39", 00:11:01.545 "is_configured": true, 00:11:01.545 "data_offset": 2048, 00:11:01.545 "data_size": 63488 00:11:01.545 }, 00:11:01.545 { 00:11:01.545 "name": "BaseBdev4", 00:11:01.545 "uuid": "93d8f303-cf10-5f4c-bd48-4d0b3b780180", 00:11:01.545 "is_configured": true, 00:11:01.545 "data_offset": 2048, 00:11:01.545 "data_size": 63488 00:11:01.545 } 00:11:01.545 ] 00:11:01.545 }' 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.545 03:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.805 [2024-11-20 03:17:51.359654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.805 [2024-11-20 03:17:51.359786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.805 [2024-11-20 03:17:51.362871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.805 [2024-11-20 03:17:51.362979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.805 [2024-11-20 03:17:51.363033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.805 [2024-11-20 03:17:51.363049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:01.805 { 00:11:01.805 "results": [ 00:11:01.805 { 00:11:01.805 "job": "raid_bdev1", 00:11:01.805 "core_mask": "0x1", 00:11:01.805 "workload": "randrw", 00:11:01.805 "percentage": 50, 00:11:01.805 "status": "finished", 00:11:01.805 "queue_depth": 1, 00:11:01.805 "io_size": 131072, 00:11:01.805 "runtime": 1.369098, 00:11:01.805 "iops": 14952.180194551449, 00:11:01.805 "mibps": 1869.0225243189311, 00:11:01.805 "io_failed": 1, 00:11:01.805 "io_timeout": 0, 00:11:01.805 "avg_latency_us": 92.92665973846907, 00:11:01.805 "min_latency_us": 26.829694323144103, 00:11:01.805 "max_latency_us": 1609.7816593886462 00:11:01.805 } 00:11:01.805 ], 00:11:01.805 "core_count": 1 00:11:01.805 } 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72882 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72882 ']' 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72882 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72882 00:11:01.805 killing process with pid 72882 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72882' 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72882 00:11:01.805 [2024-11-20 03:17:51.398549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.805 03:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72882 00:11:02.375 [2024-11-20 03:17:51.738515] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.754 03:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0sP1KX30JU 00:11:03.754 03:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:03.754 03:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:03.754 03:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:03.754 03:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:03.754 ************************************ 00:11:03.754 END TEST raid_write_error_test 00:11:03.754 ************************************ 00:11:03.754 03:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.754 03:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.754 03:17:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:03.754 00:11:03.754 real 0m4.723s 00:11:03.754 user 0m5.612s 00:11:03.754 sys 0m0.566s 00:11:03.754 03:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.754 03:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.754 03:17:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:03.754 03:17:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:03.754 03:17:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.754 03:17:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.754 03:17:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.754 ************************************ 00:11:03.754 START TEST raid_state_function_test 00:11:03.754 ************************************ 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.754 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:03.755 03:17:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73020 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73020' 00:11:03.755 Process raid pid: 73020 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73020 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73020 ']' 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.755 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.755 [2024-11-20 03:17:53.130441] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:11:03.755 [2024-11-20 03:17:53.130685] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.755 [2024-11-20 03:17:53.307057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.015 [2024-11-20 03:17:53.424202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.015 [2024-11-20 03:17:53.631160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.015 [2024-11-20 03:17:53.631278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.584 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.584 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:04.584 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.584 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.584 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.584 [2024-11-20 03:17:54.002555] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.584 [2024-11-20 03:17:54.002666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.584 [2024-11-20 03:17:54.002703] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.584 [2024-11-20 03:17:54.002718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.584 [2024-11-20 03:17:54.002725] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:04.584 [2024-11-20 03:17:54.002734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.584 [2024-11-20 03:17:54.002741] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:04.584 [2024-11-20 03:17:54.002749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.584 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.584 "name": "Existed_Raid", 00:11:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.584 "strip_size_kb": 0, 00:11:04.584 "state": "configuring", 00:11:04.584 "raid_level": "raid1", 00:11:04.584 "superblock": false, 00:11:04.584 "num_base_bdevs": 4, 00:11:04.584 "num_base_bdevs_discovered": 0, 00:11:04.584 "num_base_bdevs_operational": 4, 00:11:04.584 "base_bdevs_list": [ 00:11:04.584 { 00:11:04.584 "name": "BaseBdev1", 00:11:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.584 "is_configured": false, 00:11:04.584 "data_offset": 0, 00:11:04.584 "data_size": 0 00:11:04.584 }, 00:11:04.584 { 00:11:04.584 "name": "BaseBdev2", 00:11:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.584 "is_configured": false, 00:11:04.584 "data_offset": 0, 00:11:04.584 "data_size": 0 00:11:04.584 }, 00:11:04.584 { 00:11:04.584 "name": "BaseBdev3", 00:11:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.584 "is_configured": false, 00:11:04.584 "data_offset": 0, 00:11:04.584 "data_size": 0 00:11:04.584 }, 00:11:04.584 { 00:11:04.584 "name": "BaseBdev4", 00:11:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.584 "is_configured": false, 00:11:04.584 "data_offset": 0, 00:11:04.584 "data_size": 0 00:11:04.584 } 00:11:04.584 ] 00:11:04.585 }' 00:11:04.585 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.585 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.844 [2024-11-20 03:17:54.413800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.844 [2024-11-20 03:17:54.413907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.844 [2024-11-20 03:17:54.425768] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.844 [2024-11-20 03:17:54.425867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.844 [2024-11-20 03:17:54.425895] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.844 [2024-11-20 03:17:54.425919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.844 [2024-11-20 03:17:54.425937] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:04.844 [2024-11-20 03:17:54.425959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.844 [2024-11-20 03:17:54.425989] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:04.844 [2024-11-20 03:17:54.426009] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.844 [2024-11-20 03:17:54.469574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.844 BaseBdev1 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.844 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.103 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.103 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.103 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.103 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.103 [ 00:11:05.103 { 00:11:05.103 "name": "BaseBdev1", 00:11:05.103 "aliases": [ 00:11:05.103 "6180c81c-5613-4074-a1ab-00ea4497381f" 00:11:05.103 ], 00:11:05.103 "product_name": "Malloc disk", 00:11:05.103 "block_size": 512, 00:11:05.103 "num_blocks": 65536, 00:11:05.103 "uuid": "6180c81c-5613-4074-a1ab-00ea4497381f", 00:11:05.103 "assigned_rate_limits": { 00:11:05.103 "rw_ios_per_sec": 0, 00:11:05.103 "rw_mbytes_per_sec": 0, 00:11:05.103 "r_mbytes_per_sec": 0, 00:11:05.103 "w_mbytes_per_sec": 0 00:11:05.103 }, 00:11:05.103 "claimed": true, 00:11:05.103 "claim_type": "exclusive_write", 00:11:05.103 "zoned": false, 00:11:05.103 "supported_io_types": { 00:11:05.103 "read": true, 00:11:05.103 "write": true, 00:11:05.103 "unmap": true, 00:11:05.104 "flush": true, 00:11:05.104 "reset": true, 00:11:05.104 "nvme_admin": false, 00:11:05.104 "nvme_io": false, 00:11:05.104 "nvme_io_md": false, 00:11:05.104 "write_zeroes": true, 00:11:05.104 "zcopy": true, 00:11:05.104 "get_zone_info": false, 00:11:05.104 "zone_management": false, 00:11:05.104 "zone_append": false, 00:11:05.104 "compare": false, 00:11:05.104 "compare_and_write": false, 00:11:05.104 "abort": true, 00:11:05.104 "seek_hole": false, 00:11:05.104 "seek_data": false, 00:11:05.104 "copy": true, 00:11:05.104 "nvme_iov_md": false 00:11:05.104 }, 00:11:05.104 "memory_domains": [ 00:11:05.104 { 00:11:05.104 "dma_device_id": "system", 00:11:05.104 "dma_device_type": 1 00:11:05.104 }, 00:11:05.104 { 00:11:05.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.104 "dma_device_type": 2 00:11:05.104 } 00:11:05.104 ], 00:11:05.104 "driver_specific": {} 00:11:05.104 } 00:11:05.104 ] 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.104 "name": "Existed_Raid", 
00:11:05.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.104 "strip_size_kb": 0, 00:11:05.104 "state": "configuring", 00:11:05.104 "raid_level": "raid1", 00:11:05.104 "superblock": false, 00:11:05.104 "num_base_bdevs": 4, 00:11:05.104 "num_base_bdevs_discovered": 1, 00:11:05.104 "num_base_bdevs_operational": 4, 00:11:05.104 "base_bdevs_list": [ 00:11:05.104 { 00:11:05.104 "name": "BaseBdev1", 00:11:05.104 "uuid": "6180c81c-5613-4074-a1ab-00ea4497381f", 00:11:05.104 "is_configured": true, 00:11:05.104 "data_offset": 0, 00:11:05.104 "data_size": 65536 00:11:05.104 }, 00:11:05.104 { 00:11:05.104 "name": "BaseBdev2", 00:11:05.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.104 "is_configured": false, 00:11:05.104 "data_offset": 0, 00:11:05.104 "data_size": 0 00:11:05.104 }, 00:11:05.104 { 00:11:05.104 "name": "BaseBdev3", 00:11:05.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.104 "is_configured": false, 00:11:05.104 "data_offset": 0, 00:11:05.104 "data_size": 0 00:11:05.104 }, 00:11:05.104 { 00:11:05.104 "name": "BaseBdev4", 00:11:05.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.104 "is_configured": false, 00:11:05.104 "data_offset": 0, 00:11:05.104 "data_size": 0 00:11:05.104 } 00:11:05.104 ] 00:11:05.104 }' 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.104 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.364 [2024-11-20 03:17:54.928827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.364 [2024-11-20 03:17:54.928887] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.364 [2024-11-20 03:17:54.940876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.364 [2024-11-20 03:17:54.942802] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.364 [2024-11-20 03:17:54.942900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.364 [2024-11-20 03:17:54.942916] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.364 [2024-11-20 03:17:54.942930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.364 [2024-11-20 03:17:54.942937] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.364 [2024-11-20 03:17:54.942948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.364 
03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.364 "name": "Existed_Raid", 00:11:05.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.364 "strip_size_kb": 0, 00:11:05.364 "state": "configuring", 00:11:05.364 "raid_level": "raid1", 00:11:05.364 "superblock": false, 00:11:05.364 "num_base_bdevs": 4, 00:11:05.364 "num_base_bdevs_discovered": 1, 
00:11:05.364 "num_base_bdevs_operational": 4, 00:11:05.364 "base_bdevs_list": [ 00:11:05.364 { 00:11:05.364 "name": "BaseBdev1", 00:11:05.364 "uuid": "6180c81c-5613-4074-a1ab-00ea4497381f", 00:11:05.364 "is_configured": true, 00:11:05.364 "data_offset": 0, 00:11:05.364 "data_size": 65536 00:11:05.364 }, 00:11:05.364 { 00:11:05.364 "name": "BaseBdev2", 00:11:05.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.364 "is_configured": false, 00:11:05.364 "data_offset": 0, 00:11:05.364 "data_size": 0 00:11:05.364 }, 00:11:05.364 { 00:11:05.364 "name": "BaseBdev3", 00:11:05.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.364 "is_configured": false, 00:11:05.364 "data_offset": 0, 00:11:05.364 "data_size": 0 00:11:05.364 }, 00:11:05.364 { 00:11:05.364 "name": "BaseBdev4", 00:11:05.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.364 "is_configured": false, 00:11:05.364 "data_offset": 0, 00:11:05.364 "data_size": 0 00:11:05.364 } 00:11:05.364 ] 00:11:05.364 }' 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.364 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.933 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.933 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.933 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.933 [2024-11-20 03:17:55.427198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.933 BaseBdev2 00:11:05.933 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.934 [ 00:11:05.934 { 00:11:05.934 "name": "BaseBdev2", 00:11:05.934 "aliases": [ 00:11:05.934 "0fa59519-013e-466c-99ec-7d28f38ec917" 00:11:05.934 ], 00:11:05.934 "product_name": "Malloc disk", 00:11:05.934 "block_size": 512, 00:11:05.934 "num_blocks": 65536, 00:11:05.934 "uuid": "0fa59519-013e-466c-99ec-7d28f38ec917", 00:11:05.934 "assigned_rate_limits": { 00:11:05.934 "rw_ios_per_sec": 0, 00:11:05.934 "rw_mbytes_per_sec": 0, 00:11:05.934 "r_mbytes_per_sec": 0, 00:11:05.934 "w_mbytes_per_sec": 0 00:11:05.934 }, 00:11:05.934 "claimed": true, 00:11:05.934 "claim_type": "exclusive_write", 00:11:05.934 "zoned": false, 00:11:05.934 "supported_io_types": { 00:11:05.934 "read": true, 
00:11:05.934 "write": true, 00:11:05.934 "unmap": true, 00:11:05.934 "flush": true, 00:11:05.934 "reset": true, 00:11:05.934 "nvme_admin": false, 00:11:05.934 "nvme_io": false, 00:11:05.934 "nvme_io_md": false, 00:11:05.934 "write_zeroes": true, 00:11:05.934 "zcopy": true, 00:11:05.934 "get_zone_info": false, 00:11:05.934 "zone_management": false, 00:11:05.934 "zone_append": false, 00:11:05.934 "compare": false, 00:11:05.934 "compare_and_write": false, 00:11:05.934 "abort": true, 00:11:05.934 "seek_hole": false, 00:11:05.934 "seek_data": false, 00:11:05.934 "copy": true, 00:11:05.934 "nvme_iov_md": false 00:11:05.934 }, 00:11:05.934 "memory_domains": [ 00:11:05.934 { 00:11:05.934 "dma_device_id": "system", 00:11:05.934 "dma_device_type": 1 00:11:05.934 }, 00:11:05.934 { 00:11:05.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.934 "dma_device_type": 2 00:11:05.934 } 00:11:05.934 ], 00:11:05.934 "driver_specific": {} 00:11:05.934 } 00:11:05.934 ] 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.934 "name": "Existed_Raid", 00:11:05.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.934 "strip_size_kb": 0, 00:11:05.934 "state": "configuring", 00:11:05.934 "raid_level": "raid1", 00:11:05.934 "superblock": false, 00:11:05.934 "num_base_bdevs": 4, 00:11:05.934 "num_base_bdevs_discovered": 2, 00:11:05.934 "num_base_bdevs_operational": 4, 00:11:05.934 "base_bdevs_list": [ 00:11:05.934 { 00:11:05.934 "name": "BaseBdev1", 00:11:05.934 "uuid": "6180c81c-5613-4074-a1ab-00ea4497381f", 00:11:05.934 "is_configured": true, 00:11:05.934 "data_offset": 0, 00:11:05.934 "data_size": 65536 00:11:05.934 }, 00:11:05.934 { 00:11:05.934 "name": "BaseBdev2", 00:11:05.934 "uuid": "0fa59519-013e-466c-99ec-7d28f38ec917", 00:11:05.934 "is_configured": true, 
00:11:05.934 "data_offset": 0, 00:11:05.934 "data_size": 65536 00:11:05.934 }, 00:11:05.934 { 00:11:05.934 "name": "BaseBdev3", 00:11:05.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.934 "is_configured": false, 00:11:05.934 "data_offset": 0, 00:11:05.934 "data_size": 0 00:11:05.934 }, 00:11:05.934 { 00:11:05.934 "name": "BaseBdev4", 00:11:05.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.934 "is_configured": false, 00:11:05.934 "data_offset": 0, 00:11:05.934 "data_size": 0 00:11:05.934 } 00:11:05.934 ] 00:11:05.934 }' 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.934 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.503 [2024-11-20 03:17:55.923752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.503 BaseBdev3 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.503 [ 00:11:06.503 { 00:11:06.503 "name": "BaseBdev3", 00:11:06.503 "aliases": [ 00:11:06.503 "894b7676-9342-442a-9652-a30266797054" 00:11:06.503 ], 00:11:06.503 "product_name": "Malloc disk", 00:11:06.503 "block_size": 512, 00:11:06.503 "num_blocks": 65536, 00:11:06.503 "uuid": "894b7676-9342-442a-9652-a30266797054", 00:11:06.503 "assigned_rate_limits": { 00:11:06.503 "rw_ios_per_sec": 0, 00:11:06.503 "rw_mbytes_per_sec": 0, 00:11:06.503 "r_mbytes_per_sec": 0, 00:11:06.503 "w_mbytes_per_sec": 0 00:11:06.503 }, 00:11:06.503 "claimed": true, 00:11:06.503 "claim_type": "exclusive_write", 00:11:06.503 "zoned": false, 00:11:06.503 "supported_io_types": { 00:11:06.503 "read": true, 00:11:06.503 "write": true, 00:11:06.503 "unmap": true, 00:11:06.503 "flush": true, 00:11:06.503 "reset": true, 00:11:06.503 "nvme_admin": false, 00:11:06.503 "nvme_io": false, 00:11:06.503 "nvme_io_md": false, 00:11:06.503 "write_zeroes": true, 00:11:06.503 "zcopy": true, 00:11:06.503 "get_zone_info": false, 00:11:06.503 "zone_management": false, 00:11:06.503 "zone_append": false, 00:11:06.503 "compare": false, 00:11:06.503 "compare_and_write": false, 
00:11:06.503 "abort": true, 00:11:06.503 "seek_hole": false, 00:11:06.503 "seek_data": false, 00:11:06.503 "copy": true, 00:11:06.503 "nvme_iov_md": false 00:11:06.503 }, 00:11:06.503 "memory_domains": [ 00:11:06.503 { 00:11:06.503 "dma_device_id": "system", 00:11:06.503 "dma_device_type": 1 00:11:06.503 }, 00:11:06.503 { 00:11:06.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.503 "dma_device_type": 2 00:11:06.503 } 00:11:06.503 ], 00:11:06.503 "driver_specific": {} 00:11:06.503 } 00:11:06.503 ] 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.503 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.504 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:06.504 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.504 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.504 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.504 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.504 03:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.504 03:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.504 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.504 "name": "Existed_Raid", 00:11:06.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.504 "strip_size_kb": 0, 00:11:06.504 "state": "configuring", 00:11:06.504 "raid_level": "raid1", 00:11:06.504 "superblock": false, 00:11:06.504 "num_base_bdevs": 4, 00:11:06.504 "num_base_bdevs_discovered": 3, 00:11:06.504 "num_base_bdevs_operational": 4, 00:11:06.504 "base_bdevs_list": [ 00:11:06.504 { 00:11:06.504 "name": "BaseBdev1", 00:11:06.504 "uuid": "6180c81c-5613-4074-a1ab-00ea4497381f", 00:11:06.504 "is_configured": true, 00:11:06.504 "data_offset": 0, 00:11:06.504 "data_size": 65536 00:11:06.504 }, 00:11:06.504 { 00:11:06.504 "name": "BaseBdev2", 00:11:06.504 "uuid": "0fa59519-013e-466c-99ec-7d28f38ec917", 00:11:06.504 "is_configured": true, 00:11:06.504 "data_offset": 0, 00:11:06.504 "data_size": 65536 00:11:06.504 }, 00:11:06.504 { 00:11:06.504 "name": "BaseBdev3", 00:11:06.504 "uuid": "894b7676-9342-442a-9652-a30266797054", 00:11:06.504 "is_configured": true, 00:11:06.504 "data_offset": 0, 00:11:06.504 "data_size": 65536 00:11:06.504 }, 00:11:06.504 { 00:11:06.504 "name": "BaseBdev4", 00:11:06.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.504 "is_configured": false, 
00:11:06.504 "data_offset": 0, 00:11:06.504 "data_size": 0 00:11:06.504 } 00:11:06.504 ] 00:11:06.504 }' 00:11:06.504 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.504 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.073 [2024-11-20 03:17:56.451613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.073 [2024-11-20 03:17:56.451749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:07.073 [2024-11-20 03:17:56.451777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:07.073 [2024-11-20 03:17:56.452089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:07.073 [2024-11-20 03:17:56.452305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:07.073 [2024-11-20 03:17:56.452355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:07.073 [2024-11-20 03:17:56.452690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.073 BaseBdev4 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.073 [ 00:11:07.073 { 00:11:07.073 "name": "BaseBdev4", 00:11:07.073 "aliases": [ 00:11:07.073 "ec26f18b-f659-4af6-9e54-b9fc81b90f4f" 00:11:07.073 ], 00:11:07.073 "product_name": "Malloc disk", 00:11:07.073 "block_size": 512, 00:11:07.073 "num_blocks": 65536, 00:11:07.073 "uuid": "ec26f18b-f659-4af6-9e54-b9fc81b90f4f", 00:11:07.073 "assigned_rate_limits": { 00:11:07.073 "rw_ios_per_sec": 0, 00:11:07.073 "rw_mbytes_per_sec": 0, 00:11:07.073 "r_mbytes_per_sec": 0, 00:11:07.073 "w_mbytes_per_sec": 0 00:11:07.073 }, 00:11:07.073 "claimed": true, 00:11:07.073 "claim_type": "exclusive_write", 00:11:07.073 "zoned": false, 00:11:07.073 "supported_io_types": { 00:11:07.073 "read": true, 00:11:07.073 "write": true, 00:11:07.073 "unmap": true, 00:11:07.073 "flush": true, 00:11:07.073 "reset": true, 00:11:07.073 
"nvme_admin": false, 00:11:07.073 "nvme_io": false, 00:11:07.073 "nvme_io_md": false, 00:11:07.073 "write_zeroes": true, 00:11:07.073 "zcopy": true, 00:11:07.073 "get_zone_info": false, 00:11:07.073 "zone_management": false, 00:11:07.073 "zone_append": false, 00:11:07.073 "compare": false, 00:11:07.073 "compare_and_write": false, 00:11:07.073 "abort": true, 00:11:07.073 "seek_hole": false, 00:11:07.073 "seek_data": false, 00:11:07.073 "copy": true, 00:11:07.073 "nvme_iov_md": false 00:11:07.073 }, 00:11:07.073 "memory_domains": [ 00:11:07.073 { 00:11:07.073 "dma_device_id": "system", 00:11:07.073 "dma_device_type": 1 00:11:07.073 }, 00:11:07.073 { 00:11:07.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.073 "dma_device_type": 2 00:11:07.073 } 00:11:07.073 ], 00:11:07.073 "driver_specific": {} 00:11:07.073 } 00:11:07.073 ] 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.073 03:17:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.073 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.073 "name": "Existed_Raid", 00:11:07.073 "uuid": "43f4640d-0aa9-46da-a367-55b17f27aa1b", 00:11:07.073 "strip_size_kb": 0, 00:11:07.073 "state": "online", 00:11:07.073 "raid_level": "raid1", 00:11:07.073 "superblock": false, 00:11:07.073 "num_base_bdevs": 4, 00:11:07.073 "num_base_bdevs_discovered": 4, 00:11:07.073 "num_base_bdevs_operational": 4, 00:11:07.073 "base_bdevs_list": [ 00:11:07.073 { 00:11:07.073 "name": "BaseBdev1", 00:11:07.073 "uuid": "6180c81c-5613-4074-a1ab-00ea4497381f", 00:11:07.073 "is_configured": true, 00:11:07.073 "data_offset": 0, 00:11:07.073 "data_size": 65536 00:11:07.073 }, 00:11:07.073 { 00:11:07.073 "name": "BaseBdev2", 00:11:07.073 "uuid": "0fa59519-013e-466c-99ec-7d28f38ec917", 00:11:07.073 "is_configured": true, 00:11:07.073 "data_offset": 0, 00:11:07.073 "data_size": 65536 00:11:07.073 }, 00:11:07.073 { 00:11:07.073 "name": "BaseBdev3", 00:11:07.073 "uuid": 
"894b7676-9342-442a-9652-a30266797054", 00:11:07.073 "is_configured": true, 00:11:07.073 "data_offset": 0, 00:11:07.073 "data_size": 65536 00:11:07.073 }, 00:11:07.073 { 00:11:07.074 "name": "BaseBdev4", 00:11:07.074 "uuid": "ec26f18b-f659-4af6-9e54-b9fc81b90f4f", 00:11:07.074 "is_configured": true, 00:11:07.074 "data_offset": 0, 00:11:07.074 "data_size": 65536 00:11:07.074 } 00:11:07.074 ] 00:11:07.074 }' 00:11:07.074 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.074 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.334 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.334 [2024-11-20 03:17:56.955204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.595 03:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.595 03:17:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.595 "name": "Existed_Raid", 00:11:07.595 "aliases": [ 00:11:07.595 "43f4640d-0aa9-46da-a367-55b17f27aa1b" 00:11:07.595 ], 00:11:07.595 "product_name": "Raid Volume", 00:11:07.595 "block_size": 512, 00:11:07.595 "num_blocks": 65536, 00:11:07.595 "uuid": "43f4640d-0aa9-46da-a367-55b17f27aa1b", 00:11:07.595 "assigned_rate_limits": { 00:11:07.595 "rw_ios_per_sec": 0, 00:11:07.595 "rw_mbytes_per_sec": 0, 00:11:07.595 "r_mbytes_per_sec": 0, 00:11:07.595 "w_mbytes_per_sec": 0 00:11:07.595 }, 00:11:07.595 "claimed": false, 00:11:07.595 "zoned": false, 00:11:07.595 "supported_io_types": { 00:11:07.595 "read": true, 00:11:07.595 "write": true, 00:11:07.595 "unmap": false, 00:11:07.595 "flush": false, 00:11:07.595 "reset": true, 00:11:07.595 "nvme_admin": false, 00:11:07.595 "nvme_io": false, 00:11:07.595 "nvme_io_md": false, 00:11:07.595 "write_zeroes": true, 00:11:07.595 "zcopy": false, 00:11:07.595 "get_zone_info": false, 00:11:07.595 "zone_management": false, 00:11:07.595 "zone_append": false, 00:11:07.595 "compare": false, 00:11:07.595 "compare_and_write": false, 00:11:07.595 "abort": false, 00:11:07.595 "seek_hole": false, 00:11:07.595 "seek_data": false, 00:11:07.595 "copy": false, 00:11:07.595 "nvme_iov_md": false 00:11:07.595 }, 00:11:07.595 "memory_domains": [ 00:11:07.595 { 00:11:07.595 "dma_device_id": "system", 00:11:07.595 "dma_device_type": 1 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.595 "dma_device_type": 2 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "dma_device_id": "system", 00:11:07.595 "dma_device_type": 1 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.595 "dma_device_type": 2 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "dma_device_id": "system", 00:11:07.595 "dma_device_type": 1 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:07.595 "dma_device_type": 2 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "dma_device_id": "system", 00:11:07.595 "dma_device_type": 1 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.595 "dma_device_type": 2 00:11:07.595 } 00:11:07.595 ], 00:11:07.595 "driver_specific": { 00:11:07.595 "raid": { 00:11:07.595 "uuid": "43f4640d-0aa9-46da-a367-55b17f27aa1b", 00:11:07.595 "strip_size_kb": 0, 00:11:07.595 "state": "online", 00:11:07.595 "raid_level": "raid1", 00:11:07.595 "superblock": false, 00:11:07.595 "num_base_bdevs": 4, 00:11:07.595 "num_base_bdevs_discovered": 4, 00:11:07.595 "num_base_bdevs_operational": 4, 00:11:07.595 "base_bdevs_list": [ 00:11:07.595 { 00:11:07.595 "name": "BaseBdev1", 00:11:07.595 "uuid": "6180c81c-5613-4074-a1ab-00ea4497381f", 00:11:07.595 "is_configured": true, 00:11:07.595 "data_offset": 0, 00:11:07.595 "data_size": 65536 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "name": "BaseBdev2", 00:11:07.595 "uuid": "0fa59519-013e-466c-99ec-7d28f38ec917", 00:11:07.595 "is_configured": true, 00:11:07.595 "data_offset": 0, 00:11:07.595 "data_size": 65536 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "name": "BaseBdev3", 00:11:07.595 "uuid": "894b7676-9342-442a-9652-a30266797054", 00:11:07.595 "is_configured": true, 00:11:07.595 "data_offset": 0, 00:11:07.595 "data_size": 65536 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "name": "BaseBdev4", 00:11:07.595 "uuid": "ec26f18b-f659-4af6-9e54-b9fc81b90f4f", 00:11:07.595 "is_configured": true, 00:11:07.595 "data_offset": 0, 00:11:07.595 "data_size": 65536 00:11:07.595 } 00:11:07.595 ] 00:11:07.595 } 00:11:07.595 } 00:11:07.595 }' 00:11:07.595 03:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.595 BaseBdev2 00:11:07.595 BaseBdev3 
00:11:07.595 BaseBdev4' 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.595 03:17:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.595 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.855 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.855 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.855 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.855 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:07.855 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.855 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.855 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.855 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.855 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.855 03:17:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.856 [2024-11-20 03:17:57.286476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.856 
03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.856 "name": "Existed_Raid", 00:11:07.856 "uuid": "43f4640d-0aa9-46da-a367-55b17f27aa1b", 00:11:07.856 "strip_size_kb": 0, 00:11:07.856 "state": "online", 00:11:07.856 "raid_level": "raid1", 00:11:07.856 "superblock": false, 00:11:07.856 "num_base_bdevs": 4, 00:11:07.856 "num_base_bdevs_discovered": 3, 00:11:07.856 "num_base_bdevs_operational": 3, 00:11:07.856 "base_bdevs_list": [ 00:11:07.856 { 00:11:07.856 "name": null, 00:11:07.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.856 "is_configured": false, 00:11:07.856 "data_offset": 0, 00:11:07.856 "data_size": 65536 00:11:07.856 }, 00:11:07.856 { 00:11:07.856 "name": "BaseBdev2", 00:11:07.856 "uuid": "0fa59519-013e-466c-99ec-7d28f38ec917", 00:11:07.856 "is_configured": true, 00:11:07.856 "data_offset": 0, 00:11:07.856 "data_size": 65536 00:11:07.856 }, 00:11:07.856 { 00:11:07.856 "name": "BaseBdev3", 00:11:07.856 "uuid": "894b7676-9342-442a-9652-a30266797054", 00:11:07.856 "is_configured": true, 00:11:07.856 "data_offset": 0, 
00:11:07.856 "data_size": 65536 00:11:07.856 }, 00:11:07.856 { 00:11:07.856 "name": "BaseBdev4", 00:11:07.856 "uuid": "ec26f18b-f659-4af6-9e54-b9fc81b90f4f", 00:11:07.856 "is_configured": true, 00:11:07.856 "data_offset": 0, 00:11:07.856 "data_size": 65536 00:11:07.856 } 00:11:07.856 ] 00:11:07.856 }' 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.856 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.478 [2024-11-20 03:17:57.824532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.478 03:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.478 [2024-11-20 03:17:57.983581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.478 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.478 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.478 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.478 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.478 03:17:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.478 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.478 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.479 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.739 [2024-11-20 03:17:58.138567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:08.739 [2024-11-20 03:17:58.138757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.739 [2024-11-20 03:17:58.238125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.739 [2024-11-20 03:17:58.238183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.739 [2024-11-20 03:17:58.238196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.739 BaseBdev2 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:08.739 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.740 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.740 [ 00:11:08.740 { 00:11:08.740 "name": "BaseBdev2", 00:11:08.740 "aliases": [ 00:11:08.740 "a75a2246-38ee-423a-82f4-5cc85a8ef75b" 00:11:08.740 ], 00:11:08.740 "product_name": "Malloc disk", 00:11:08.740 "block_size": 512, 00:11:08.740 "num_blocks": 65536, 00:11:08.740 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:08.740 "assigned_rate_limits": { 00:11:08.740 "rw_ios_per_sec": 0, 00:11:08.740 "rw_mbytes_per_sec": 0, 00:11:08.740 "r_mbytes_per_sec": 0, 00:11:08.740 "w_mbytes_per_sec": 0 00:11:08.740 }, 00:11:08.740 "claimed": false, 00:11:08.740 "zoned": false, 00:11:08.740 "supported_io_types": { 00:11:08.740 "read": true, 00:11:08.740 "write": true, 00:11:08.740 "unmap": true, 00:11:08.740 "flush": true, 00:11:08.740 "reset": true, 00:11:08.740 "nvme_admin": false, 00:11:08.740 "nvme_io": false, 00:11:08.740 "nvme_io_md": false, 00:11:08.740 "write_zeroes": true, 00:11:08.740 "zcopy": true, 00:11:08.740 "get_zone_info": false, 00:11:08.740 "zone_management": false, 00:11:08.740 "zone_append": false, 
00:11:08.740 "compare": false, 00:11:08.740 "compare_and_write": false, 00:11:08.740 "abort": true, 00:11:09.002 "seek_hole": false, 00:11:09.002 "seek_data": false, 00:11:09.002 "copy": true, 00:11:09.002 "nvme_iov_md": false 00:11:09.002 }, 00:11:09.002 "memory_domains": [ 00:11:09.002 { 00:11:09.002 "dma_device_id": "system", 00:11:09.002 "dma_device_type": 1 00:11:09.002 }, 00:11:09.002 { 00:11:09.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.002 "dma_device_type": 2 00:11:09.002 } 00:11:09.002 ], 00:11:09.002 "driver_specific": {} 00:11:09.002 } 00:11:09.002 ] 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.002 BaseBdev3 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.002 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.002 [ 00:11:09.002 { 00:11:09.002 "name": "BaseBdev3", 00:11:09.002 "aliases": [ 00:11:09.002 "745b2142-8ebf-4f53-98eb-4e4fca3f6b09" 00:11:09.002 ], 00:11:09.002 "product_name": "Malloc disk", 00:11:09.002 "block_size": 512, 00:11:09.002 "num_blocks": 65536, 00:11:09.002 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:09.002 "assigned_rate_limits": { 00:11:09.002 "rw_ios_per_sec": 0, 00:11:09.002 "rw_mbytes_per_sec": 0, 00:11:09.002 "r_mbytes_per_sec": 0, 00:11:09.002 "w_mbytes_per_sec": 0 00:11:09.002 }, 00:11:09.002 "claimed": false, 00:11:09.002 "zoned": false, 00:11:09.002 "supported_io_types": { 00:11:09.002 "read": true, 00:11:09.002 "write": true, 00:11:09.002 "unmap": true, 00:11:09.002 "flush": true, 00:11:09.002 "reset": true, 00:11:09.002 "nvme_admin": false, 00:11:09.002 "nvme_io": false, 00:11:09.002 "nvme_io_md": false, 00:11:09.002 "write_zeroes": true, 00:11:09.002 "zcopy": true, 00:11:09.002 "get_zone_info": false, 00:11:09.002 "zone_management": false, 00:11:09.002 "zone_append": false, 
00:11:09.002 "compare": false, 00:11:09.003 "compare_and_write": false, 00:11:09.003 "abort": true, 00:11:09.003 "seek_hole": false, 00:11:09.003 "seek_data": false, 00:11:09.003 "copy": true, 00:11:09.003 "nvme_iov_md": false 00:11:09.003 }, 00:11:09.003 "memory_domains": [ 00:11:09.003 { 00:11:09.003 "dma_device_id": "system", 00:11:09.003 "dma_device_type": 1 00:11:09.003 }, 00:11:09.003 { 00:11:09.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.003 "dma_device_type": 2 00:11:09.003 } 00:11:09.003 ], 00:11:09.003 "driver_specific": {} 00:11:09.003 } 00:11:09.003 ] 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.003 BaseBdev4 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.003 [ 00:11:09.003 { 00:11:09.003 "name": "BaseBdev4", 00:11:09.003 "aliases": [ 00:11:09.003 "106be013-dc4a-48c9-82a9-2c0183fc87d7" 00:11:09.003 ], 00:11:09.003 "product_name": "Malloc disk", 00:11:09.003 "block_size": 512, 00:11:09.003 "num_blocks": 65536, 00:11:09.003 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:09.003 "assigned_rate_limits": { 00:11:09.003 "rw_ios_per_sec": 0, 00:11:09.003 "rw_mbytes_per_sec": 0, 00:11:09.003 "r_mbytes_per_sec": 0, 00:11:09.003 "w_mbytes_per_sec": 0 00:11:09.003 }, 00:11:09.003 "claimed": false, 00:11:09.003 "zoned": false, 00:11:09.003 "supported_io_types": { 00:11:09.003 "read": true, 00:11:09.003 "write": true, 00:11:09.003 "unmap": true, 00:11:09.003 "flush": true, 00:11:09.003 "reset": true, 00:11:09.003 "nvme_admin": false, 00:11:09.003 "nvme_io": false, 00:11:09.003 "nvme_io_md": false, 00:11:09.003 "write_zeroes": true, 00:11:09.003 "zcopy": true, 00:11:09.003 "get_zone_info": false, 00:11:09.003 "zone_management": false, 00:11:09.003 "zone_append": false, 
00:11:09.003 "compare": false, 00:11:09.003 "compare_and_write": false, 00:11:09.003 "abort": true, 00:11:09.003 "seek_hole": false, 00:11:09.003 "seek_data": false, 00:11:09.003 "copy": true, 00:11:09.003 "nvme_iov_md": false 00:11:09.003 }, 00:11:09.003 "memory_domains": [ 00:11:09.003 { 00:11:09.003 "dma_device_id": "system", 00:11:09.003 "dma_device_type": 1 00:11:09.003 }, 00:11:09.003 { 00:11:09.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.003 "dma_device_type": 2 00:11:09.003 } 00:11:09.003 ], 00:11:09.003 "driver_specific": {} 00:11:09.003 } 00:11:09.003 ] 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.003 [2024-11-20 03:17:58.544356] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.003 [2024-11-20 03:17:58.544459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.003 [2024-11-20 03:17:58.544504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.003 [2024-11-20 03:17:58.546479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.003 [2024-11-20 03:17:58.546569] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.003 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:09.003 "name": "Existed_Raid", 00:11:09.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.003 "strip_size_kb": 0, 00:11:09.003 "state": "configuring", 00:11:09.003 "raid_level": "raid1", 00:11:09.003 "superblock": false, 00:11:09.003 "num_base_bdevs": 4, 00:11:09.003 "num_base_bdevs_discovered": 3, 00:11:09.003 "num_base_bdevs_operational": 4, 00:11:09.003 "base_bdevs_list": [ 00:11:09.003 { 00:11:09.003 "name": "BaseBdev1", 00:11:09.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.003 "is_configured": false, 00:11:09.003 "data_offset": 0, 00:11:09.003 "data_size": 0 00:11:09.003 }, 00:11:09.003 { 00:11:09.003 "name": "BaseBdev2", 00:11:09.003 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:09.003 "is_configured": true, 00:11:09.003 "data_offset": 0, 00:11:09.003 "data_size": 65536 00:11:09.003 }, 00:11:09.003 { 00:11:09.003 "name": "BaseBdev3", 00:11:09.003 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:09.003 "is_configured": true, 00:11:09.003 "data_offset": 0, 00:11:09.004 "data_size": 65536 00:11:09.004 }, 00:11:09.004 { 00:11:09.004 "name": "BaseBdev4", 00:11:09.004 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:09.004 "is_configured": true, 00:11:09.004 "data_offset": 0, 00:11:09.004 "data_size": 65536 00:11:09.004 } 00:11:09.004 ] 00:11:09.004 }' 00:11:09.004 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.004 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:09.575 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.575 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 [2024-11-20 03:17:58.935729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:09.575 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.575 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.575 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.575 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.576 "name": "Existed_Raid", 00:11:09.576 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:09.576 "strip_size_kb": 0, 00:11:09.576 "state": "configuring", 00:11:09.576 "raid_level": "raid1", 00:11:09.576 "superblock": false, 00:11:09.576 "num_base_bdevs": 4, 00:11:09.576 "num_base_bdevs_discovered": 2, 00:11:09.576 "num_base_bdevs_operational": 4, 00:11:09.576 "base_bdevs_list": [ 00:11:09.576 { 00:11:09.576 "name": "BaseBdev1", 00:11:09.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.576 "is_configured": false, 00:11:09.576 "data_offset": 0, 00:11:09.576 "data_size": 0 00:11:09.576 }, 00:11:09.576 { 00:11:09.576 "name": null, 00:11:09.576 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:09.576 "is_configured": false, 00:11:09.576 "data_offset": 0, 00:11:09.576 "data_size": 65536 00:11:09.576 }, 00:11:09.576 { 00:11:09.576 "name": "BaseBdev3", 00:11:09.576 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:09.576 "is_configured": true, 00:11:09.576 "data_offset": 0, 00:11:09.576 "data_size": 65536 00:11:09.576 }, 00:11:09.576 { 00:11:09.576 "name": "BaseBdev4", 00:11:09.576 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:09.576 "is_configured": true, 00:11:09.576 "data_offset": 0, 00:11:09.576 "data_size": 65536 00:11:09.576 } 00:11:09.576 ] 00:11:09.576 }' 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.576 03:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.836 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.836 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.836 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.836 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.836 03:17:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.836 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:09.836 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.836 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.836 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.837 [2024-11-20 03:17:59.453232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.837 BaseBdev1 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.837 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.096 [ 00:11:10.096 { 00:11:10.096 "name": "BaseBdev1", 00:11:10.096 "aliases": [ 00:11:10.096 "9e5c4061-2388-4b00-bccf-27c25ed328e7" 00:11:10.096 ], 00:11:10.096 "product_name": "Malloc disk", 00:11:10.096 "block_size": 512, 00:11:10.096 "num_blocks": 65536, 00:11:10.096 "uuid": "9e5c4061-2388-4b00-bccf-27c25ed328e7", 00:11:10.096 "assigned_rate_limits": { 00:11:10.096 "rw_ios_per_sec": 0, 00:11:10.096 "rw_mbytes_per_sec": 0, 00:11:10.096 "r_mbytes_per_sec": 0, 00:11:10.096 "w_mbytes_per_sec": 0 00:11:10.096 }, 00:11:10.096 "claimed": true, 00:11:10.096 "claim_type": "exclusive_write", 00:11:10.096 "zoned": false, 00:11:10.096 "supported_io_types": { 00:11:10.096 "read": true, 00:11:10.096 "write": true, 00:11:10.096 "unmap": true, 00:11:10.096 "flush": true, 00:11:10.096 "reset": true, 00:11:10.096 "nvme_admin": false, 00:11:10.096 "nvme_io": false, 00:11:10.096 "nvme_io_md": false, 00:11:10.096 "write_zeroes": true, 00:11:10.096 "zcopy": true, 00:11:10.096 "get_zone_info": false, 00:11:10.096 "zone_management": false, 00:11:10.096 "zone_append": false, 00:11:10.096 "compare": false, 00:11:10.096 "compare_and_write": false, 00:11:10.096 "abort": true, 00:11:10.096 "seek_hole": false, 00:11:10.096 "seek_data": false, 00:11:10.096 "copy": true, 00:11:10.096 "nvme_iov_md": false 00:11:10.096 }, 00:11:10.096 "memory_domains": [ 00:11:10.096 { 00:11:10.096 "dma_device_id": "system", 00:11:10.096 "dma_device_type": 1 00:11:10.096 }, 00:11:10.096 { 00:11:10.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.096 "dma_device_type": 2 00:11:10.096 } 00:11:10.096 ], 00:11:10.096 "driver_specific": {} 00:11:10.096 } 00:11:10.096 ] 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.096 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.096 "name": "Existed_Raid", 00:11:10.096 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:10.096 "strip_size_kb": 0, 00:11:10.096 "state": "configuring", 00:11:10.096 "raid_level": "raid1", 00:11:10.096 "superblock": false, 00:11:10.096 "num_base_bdevs": 4, 00:11:10.096 "num_base_bdevs_discovered": 3, 00:11:10.096 "num_base_bdevs_operational": 4, 00:11:10.096 "base_bdevs_list": [ 00:11:10.096 { 00:11:10.096 "name": "BaseBdev1", 00:11:10.096 "uuid": "9e5c4061-2388-4b00-bccf-27c25ed328e7", 00:11:10.096 "is_configured": true, 00:11:10.096 "data_offset": 0, 00:11:10.096 "data_size": 65536 00:11:10.096 }, 00:11:10.096 { 00:11:10.096 "name": null, 00:11:10.097 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:10.097 "is_configured": false, 00:11:10.097 "data_offset": 0, 00:11:10.097 "data_size": 65536 00:11:10.097 }, 00:11:10.097 { 00:11:10.097 "name": "BaseBdev3", 00:11:10.097 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:10.097 "is_configured": true, 00:11:10.097 "data_offset": 0, 00:11:10.097 "data_size": 65536 00:11:10.097 }, 00:11:10.097 { 00:11:10.097 "name": "BaseBdev4", 00:11:10.097 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:10.097 "is_configured": true, 00:11:10.097 "data_offset": 0, 00:11:10.097 "data_size": 65536 00:11:10.097 } 00:11:10.097 ] 00:11:10.097 }' 00:11:10.097 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.097 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.356 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.356 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.356 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.356 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.356 03:17:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.357 [2024-11-20 03:17:59.972440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.357 03:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.618 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.618 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.618 "name": "Existed_Raid", 00:11:10.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.618 "strip_size_kb": 0, 00:11:10.618 "state": "configuring", 00:11:10.618 "raid_level": "raid1", 00:11:10.618 "superblock": false, 00:11:10.618 "num_base_bdevs": 4, 00:11:10.618 "num_base_bdevs_discovered": 2, 00:11:10.618 "num_base_bdevs_operational": 4, 00:11:10.618 "base_bdevs_list": [ 00:11:10.618 { 00:11:10.618 "name": "BaseBdev1", 00:11:10.618 "uuid": "9e5c4061-2388-4b00-bccf-27c25ed328e7", 00:11:10.618 "is_configured": true, 00:11:10.618 "data_offset": 0, 00:11:10.618 "data_size": 65536 00:11:10.618 }, 00:11:10.618 { 00:11:10.618 "name": null, 00:11:10.618 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:10.618 "is_configured": false, 00:11:10.618 "data_offset": 0, 00:11:10.618 "data_size": 65536 00:11:10.618 }, 00:11:10.618 { 00:11:10.618 "name": null, 00:11:10.618 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:10.618 "is_configured": false, 00:11:10.618 "data_offset": 0, 00:11:10.618 "data_size": 65536 00:11:10.618 }, 00:11:10.618 { 00:11:10.618 "name": "BaseBdev4", 00:11:10.618 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:10.618 "is_configured": true, 00:11:10.618 "data_offset": 0, 00:11:10.618 "data_size": 65536 00:11:10.618 } 00:11:10.618 ] 00:11:10.618 }' 00:11:10.618 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.618 03:18:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.878 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.879 [2024-11-20 03:18:00.447659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.879 03:18:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.879 "name": "Existed_Raid", 00:11:10.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.879 "strip_size_kb": 0, 00:11:10.879 "state": "configuring", 00:11:10.879 "raid_level": "raid1", 00:11:10.879 "superblock": false, 00:11:10.879 "num_base_bdevs": 4, 00:11:10.879 "num_base_bdevs_discovered": 3, 00:11:10.879 "num_base_bdevs_operational": 4, 00:11:10.879 "base_bdevs_list": [ 00:11:10.879 { 00:11:10.879 "name": "BaseBdev1", 00:11:10.879 "uuid": "9e5c4061-2388-4b00-bccf-27c25ed328e7", 00:11:10.879 "is_configured": true, 00:11:10.879 "data_offset": 0, 00:11:10.879 "data_size": 65536 00:11:10.879 }, 00:11:10.879 { 00:11:10.879 "name": null, 00:11:10.879 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:10.879 "is_configured": false, 00:11:10.879 "data_offset": 
0, 00:11:10.879 "data_size": 65536 00:11:10.879 }, 00:11:10.879 { 00:11:10.879 "name": "BaseBdev3", 00:11:10.879 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:10.879 "is_configured": true, 00:11:10.879 "data_offset": 0, 00:11:10.879 "data_size": 65536 00:11:10.879 }, 00:11:10.879 { 00:11:10.879 "name": "BaseBdev4", 00:11:10.879 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:10.879 "is_configured": true, 00:11:10.879 "data_offset": 0, 00:11:10.879 "data_size": 65536 00:11:10.879 } 00:11:10.879 ] 00:11:10.879 }' 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.879 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.450 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.450 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.450 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.450 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.450 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.450 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:11.450 03:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.450 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.450 03:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.450 [2024-11-20 03:18:00.906927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.450 03:18:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.450 "name": "Existed_Raid", 00:11:11.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.450 "strip_size_kb": 0, 00:11:11.450 "state": "configuring", 00:11:11.450 
"raid_level": "raid1", 00:11:11.450 "superblock": false, 00:11:11.450 "num_base_bdevs": 4, 00:11:11.450 "num_base_bdevs_discovered": 2, 00:11:11.450 "num_base_bdevs_operational": 4, 00:11:11.450 "base_bdevs_list": [ 00:11:11.450 { 00:11:11.450 "name": null, 00:11:11.450 "uuid": "9e5c4061-2388-4b00-bccf-27c25ed328e7", 00:11:11.450 "is_configured": false, 00:11:11.450 "data_offset": 0, 00:11:11.450 "data_size": 65536 00:11:11.450 }, 00:11:11.450 { 00:11:11.450 "name": null, 00:11:11.450 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:11.450 "is_configured": false, 00:11:11.450 "data_offset": 0, 00:11:11.450 "data_size": 65536 00:11:11.450 }, 00:11:11.450 { 00:11:11.450 "name": "BaseBdev3", 00:11:11.450 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:11.450 "is_configured": true, 00:11:11.450 "data_offset": 0, 00:11:11.450 "data_size": 65536 00:11:11.450 }, 00:11:11.450 { 00:11:11.450 "name": "BaseBdev4", 00:11:11.450 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:11.450 "is_configured": true, 00:11:11.450 "data_offset": 0, 00:11:11.450 "data_size": 65536 00:11:11.450 } 00:11:11.450 ] 00:11:11.450 }' 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.450 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.020 [2024-11-20 03:18:01.476166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.020 "name": "Existed_Raid", 00:11:12.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.020 "strip_size_kb": 0, 00:11:12.020 "state": "configuring", 00:11:12.020 "raid_level": "raid1", 00:11:12.020 "superblock": false, 00:11:12.020 "num_base_bdevs": 4, 00:11:12.020 "num_base_bdevs_discovered": 3, 00:11:12.020 "num_base_bdevs_operational": 4, 00:11:12.020 "base_bdevs_list": [ 00:11:12.020 { 00:11:12.020 "name": null, 00:11:12.020 "uuid": "9e5c4061-2388-4b00-bccf-27c25ed328e7", 00:11:12.020 "is_configured": false, 00:11:12.020 "data_offset": 0, 00:11:12.020 "data_size": 65536 00:11:12.020 }, 00:11:12.020 { 00:11:12.020 "name": "BaseBdev2", 00:11:12.020 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:12.020 "is_configured": true, 00:11:12.020 "data_offset": 0, 00:11:12.020 "data_size": 65536 00:11:12.020 }, 00:11:12.020 { 00:11:12.020 "name": "BaseBdev3", 00:11:12.020 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:12.020 "is_configured": true, 00:11:12.020 "data_offset": 0, 00:11:12.020 "data_size": 65536 00:11:12.020 }, 00:11:12.020 { 00:11:12.020 "name": "BaseBdev4", 00:11:12.020 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:12.020 "is_configured": true, 00:11:12.020 "data_offset": 0, 00:11:12.020 "data_size": 65536 00:11:12.020 } 00:11:12.020 ] 00:11:12.020 }' 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.020 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.280 03:18:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:12.280 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.280 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.280 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.280 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.280 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9e5c4061-2388-4b00-bccf-27c25ed328e7 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.540 [2024-11-20 03:18:01.987430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:12.540 [2024-11-20 03:18:01.987532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:12.540 [2024-11-20 03:18:01.987560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:12.540 
[2024-11-20 03:18:01.987877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:12.540 [2024-11-20 03:18:01.988080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:12.540 [2024-11-20 03:18:01.988124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:12.540 [2024-11-20 03:18:01.988401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.540 NewBaseBdev 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.540 03:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.540 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.541 [ 00:11:12.541 { 00:11:12.541 "name": "NewBaseBdev", 00:11:12.541 "aliases": [ 00:11:12.541 "9e5c4061-2388-4b00-bccf-27c25ed328e7" 00:11:12.541 ], 00:11:12.541 "product_name": "Malloc disk", 00:11:12.541 "block_size": 512, 00:11:12.541 "num_blocks": 65536, 00:11:12.541 "uuid": "9e5c4061-2388-4b00-bccf-27c25ed328e7", 00:11:12.541 "assigned_rate_limits": { 00:11:12.541 "rw_ios_per_sec": 0, 00:11:12.541 "rw_mbytes_per_sec": 0, 00:11:12.541 "r_mbytes_per_sec": 0, 00:11:12.541 "w_mbytes_per_sec": 0 00:11:12.541 }, 00:11:12.541 "claimed": true, 00:11:12.541 "claim_type": "exclusive_write", 00:11:12.541 "zoned": false, 00:11:12.541 "supported_io_types": { 00:11:12.541 "read": true, 00:11:12.541 "write": true, 00:11:12.541 "unmap": true, 00:11:12.541 "flush": true, 00:11:12.541 "reset": true, 00:11:12.541 "nvme_admin": false, 00:11:12.541 "nvme_io": false, 00:11:12.541 "nvme_io_md": false, 00:11:12.541 "write_zeroes": true, 00:11:12.541 "zcopy": true, 00:11:12.541 "get_zone_info": false, 00:11:12.541 "zone_management": false, 00:11:12.541 "zone_append": false, 00:11:12.541 "compare": false, 00:11:12.541 "compare_and_write": false, 00:11:12.541 "abort": true, 00:11:12.541 "seek_hole": false, 00:11:12.541 "seek_data": false, 00:11:12.541 "copy": true, 00:11:12.541 "nvme_iov_md": false 00:11:12.541 }, 00:11:12.541 "memory_domains": [ 00:11:12.541 { 00:11:12.541 "dma_device_id": "system", 00:11:12.541 "dma_device_type": 1 00:11:12.541 }, 00:11:12.541 { 00:11:12.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.541 "dma_device_type": 2 00:11:12.541 } 00:11:12.541 ], 00:11:12.541 "driver_specific": {} 00:11:12.541 } 00:11:12.541 ] 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.541 "name": "Existed_Raid", 00:11:12.541 "uuid": "26859f33-7df9-4ec0-adb6-35e0c04e4a49", 00:11:12.541 "strip_size_kb": 0, 00:11:12.541 "state": "online", 00:11:12.541 
"raid_level": "raid1", 00:11:12.541 "superblock": false, 00:11:12.541 "num_base_bdevs": 4, 00:11:12.541 "num_base_bdevs_discovered": 4, 00:11:12.541 "num_base_bdevs_operational": 4, 00:11:12.541 "base_bdevs_list": [ 00:11:12.541 { 00:11:12.541 "name": "NewBaseBdev", 00:11:12.541 "uuid": "9e5c4061-2388-4b00-bccf-27c25ed328e7", 00:11:12.541 "is_configured": true, 00:11:12.541 "data_offset": 0, 00:11:12.541 "data_size": 65536 00:11:12.541 }, 00:11:12.541 { 00:11:12.541 "name": "BaseBdev2", 00:11:12.541 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:12.541 "is_configured": true, 00:11:12.541 "data_offset": 0, 00:11:12.541 "data_size": 65536 00:11:12.541 }, 00:11:12.541 { 00:11:12.541 "name": "BaseBdev3", 00:11:12.541 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:12.541 "is_configured": true, 00:11:12.541 "data_offset": 0, 00:11:12.541 "data_size": 65536 00:11:12.541 }, 00:11:12.541 { 00:11:12.541 "name": "BaseBdev4", 00:11:12.541 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:12.541 "is_configured": true, 00:11:12.541 "data_offset": 0, 00:11:12.541 "data_size": 65536 00:11:12.541 } 00:11:12.541 ] 00:11:12.541 }' 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.541 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.800 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:12.800 [2024-11-20 03:18:02.423138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.060 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.060 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.060 "name": "Existed_Raid", 00:11:13.060 "aliases": [ 00:11:13.060 "26859f33-7df9-4ec0-adb6-35e0c04e4a49" 00:11:13.060 ], 00:11:13.060 "product_name": "Raid Volume", 00:11:13.060 "block_size": 512, 00:11:13.060 "num_blocks": 65536, 00:11:13.060 "uuid": "26859f33-7df9-4ec0-adb6-35e0c04e4a49", 00:11:13.060 "assigned_rate_limits": { 00:11:13.060 "rw_ios_per_sec": 0, 00:11:13.060 "rw_mbytes_per_sec": 0, 00:11:13.060 "r_mbytes_per_sec": 0, 00:11:13.060 "w_mbytes_per_sec": 0 00:11:13.060 }, 00:11:13.060 "claimed": false, 00:11:13.060 "zoned": false, 00:11:13.060 "supported_io_types": { 00:11:13.060 "read": true, 00:11:13.060 "write": true, 00:11:13.060 "unmap": false, 00:11:13.060 "flush": false, 00:11:13.060 "reset": true, 00:11:13.060 "nvme_admin": false, 00:11:13.060 "nvme_io": false, 00:11:13.060 "nvme_io_md": false, 00:11:13.060 "write_zeroes": true, 00:11:13.060 "zcopy": false, 00:11:13.060 "get_zone_info": false, 00:11:13.060 "zone_management": false, 00:11:13.060 "zone_append": false, 00:11:13.060 "compare": false, 00:11:13.060 "compare_and_write": false, 00:11:13.060 "abort": false, 00:11:13.060 "seek_hole": false, 00:11:13.060 "seek_data": false, 00:11:13.060 
"copy": false, 00:11:13.060 "nvme_iov_md": false 00:11:13.060 }, 00:11:13.060 "memory_domains": [ 00:11:13.060 { 00:11:13.060 "dma_device_id": "system", 00:11:13.060 "dma_device_type": 1 00:11:13.060 }, 00:11:13.060 { 00:11:13.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.060 "dma_device_type": 2 00:11:13.060 }, 00:11:13.060 { 00:11:13.060 "dma_device_id": "system", 00:11:13.060 "dma_device_type": 1 00:11:13.060 }, 00:11:13.060 { 00:11:13.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.060 "dma_device_type": 2 00:11:13.060 }, 00:11:13.060 { 00:11:13.060 "dma_device_id": "system", 00:11:13.060 "dma_device_type": 1 00:11:13.060 }, 00:11:13.060 { 00:11:13.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.060 "dma_device_type": 2 00:11:13.060 }, 00:11:13.060 { 00:11:13.061 "dma_device_id": "system", 00:11:13.061 "dma_device_type": 1 00:11:13.061 }, 00:11:13.061 { 00:11:13.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.061 "dma_device_type": 2 00:11:13.061 } 00:11:13.061 ], 00:11:13.061 "driver_specific": { 00:11:13.061 "raid": { 00:11:13.061 "uuid": "26859f33-7df9-4ec0-adb6-35e0c04e4a49", 00:11:13.061 "strip_size_kb": 0, 00:11:13.061 "state": "online", 00:11:13.061 "raid_level": "raid1", 00:11:13.061 "superblock": false, 00:11:13.061 "num_base_bdevs": 4, 00:11:13.061 "num_base_bdevs_discovered": 4, 00:11:13.061 "num_base_bdevs_operational": 4, 00:11:13.061 "base_bdevs_list": [ 00:11:13.061 { 00:11:13.061 "name": "NewBaseBdev", 00:11:13.061 "uuid": "9e5c4061-2388-4b00-bccf-27c25ed328e7", 00:11:13.061 "is_configured": true, 00:11:13.061 "data_offset": 0, 00:11:13.061 "data_size": 65536 00:11:13.061 }, 00:11:13.061 { 00:11:13.061 "name": "BaseBdev2", 00:11:13.061 "uuid": "a75a2246-38ee-423a-82f4-5cc85a8ef75b", 00:11:13.061 "is_configured": true, 00:11:13.061 "data_offset": 0, 00:11:13.061 "data_size": 65536 00:11:13.061 }, 00:11:13.061 { 00:11:13.061 "name": "BaseBdev3", 00:11:13.061 "uuid": "745b2142-8ebf-4f53-98eb-4e4fca3f6b09", 00:11:13.061 
"is_configured": true, 00:11:13.061 "data_offset": 0, 00:11:13.061 "data_size": 65536 00:11:13.061 }, 00:11:13.061 { 00:11:13.061 "name": "BaseBdev4", 00:11:13.061 "uuid": "106be013-dc4a-48c9-82a9-2c0183fc87d7", 00:11:13.061 "is_configured": true, 00:11:13.061 "data_offset": 0, 00:11:13.061 "data_size": 65536 00:11:13.061 } 00:11:13.061 ] 00:11:13.061 } 00:11:13.061 } 00:11:13.061 }' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:13.061 BaseBdev2 00:11:13.061 BaseBdev3 00:11:13.061 BaseBdev4' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.061 03:18:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.061 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.321 03:18:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.321 [2024-11-20 03:18:02.762192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.321 [2024-11-20 03:18:02.762277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.321 [2024-11-20 03:18:02.762369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.321 [2024-11-20 03:18:02.762722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.321 [2024-11-20 03:18:02.762741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73020 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73020 ']' 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73020 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:13.321 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.322 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73020 00:11:13.322 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.322 killing process with pid 73020 00:11:13.322 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.322 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73020' 00:11:13.322 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73020 00:11:13.322 [2024-11-20 03:18:02.804574] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.322 03:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73020 00:11:13.891 [2024-11-20 03:18:03.214997] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:14.831 00:11:14.831 real 0m11.316s 00:11:14.831 user 0m17.908s 00:11:14.831 sys 0m1.971s 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.831 ************************************ 00:11:14.831 END TEST raid_state_function_test 00:11:14.831 ************************************ 
00:11:14.831 03:18:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:14.831 03:18:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:14.831 03:18:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.831 03:18:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.831 ************************************ 00:11:14.831 START TEST raid_state_function_test_sb 00:11:14.831 ************************************ 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.831 
03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:14.831 Process raid pid: 73691 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73691 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73691' 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73691 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73691 ']' 00:11:14.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.831 03:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.092 [2024-11-20 03:18:04.516090] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:11:15.092 [2024-11-20 03:18:04.516203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.092 [2024-11-20 03:18:04.694166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.351 [2024-11-20 03:18:04.809455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.611 [2024-11-20 03:18:05.015536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.611 [2024-11-20 03:18:05.015686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.871 [2024-11-20 03:18:05.368635] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.871 [2024-11-20 03:18:05.368688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.871 [2024-11-20 03:18:05.368699] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.871 [2024-11-20 03:18:05.368709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.871 [2024-11-20 03:18:05.368715] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:15.871 [2024-11-20 03:18:05.368724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.871 [2024-11-20 03:18:05.368730] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.871 [2024-11-20 03:18:05.368738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.871 03:18:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.871 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.871 "name": "Existed_Raid", 00:11:15.871 "uuid": "6b0453e6-a009-4e9f-9481-bfecb5d443d4", 00:11:15.871 "strip_size_kb": 0, 00:11:15.871 "state": "configuring", 00:11:15.871 "raid_level": "raid1", 00:11:15.871 "superblock": true, 00:11:15.871 "num_base_bdevs": 4, 00:11:15.871 "num_base_bdevs_discovered": 0, 00:11:15.871 "num_base_bdevs_operational": 4, 00:11:15.871 "base_bdevs_list": [ 00:11:15.871 { 00:11:15.871 "name": "BaseBdev1", 00:11:15.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.871 "is_configured": false, 00:11:15.871 "data_offset": 0, 00:11:15.871 "data_size": 0 00:11:15.871 }, 00:11:15.871 { 00:11:15.871 "name": "BaseBdev2", 00:11:15.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.871 "is_configured": false, 00:11:15.871 "data_offset": 0, 00:11:15.871 "data_size": 0 00:11:15.871 }, 00:11:15.871 { 00:11:15.871 "name": "BaseBdev3", 00:11:15.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.871 "is_configured": false, 00:11:15.871 "data_offset": 0, 00:11:15.871 "data_size": 0 00:11:15.871 }, 00:11:15.871 { 00:11:15.871 "name": "BaseBdev4", 00:11:15.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.871 "is_configured": false, 00:11:15.871 "data_offset": 0, 00:11:15.871 "data_size": 0 00:11:15.872 } 00:11:15.872 ] 00:11:15.872 }' 00:11:15.872 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.872 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.439 03:18:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.439 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.440 [2024-11-20 03:18:05.851773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.440 [2024-11-20 03:18:05.851874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.440 [2024-11-20 03:18:05.859751] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.440 [2024-11-20 03:18:05.859857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.440 [2024-11-20 03:18:05.859900] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.440 [2024-11-20 03:18:05.859925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.440 [2024-11-20 03:18:05.859944] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.440 [2024-11-20 03:18:05.859965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.440 [2024-11-20 03:18:05.859983] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:16.440 [2024-11-20 03:18:05.860004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.440 [2024-11-20 03:18:05.904688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.440 BaseBdev1 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.440 [ 00:11:16.440 { 00:11:16.440 "name": "BaseBdev1", 00:11:16.440 "aliases": [ 00:11:16.440 "9771209f-08b0-45a4-8ccb-ed7f0117a751" 00:11:16.440 ], 00:11:16.440 "product_name": "Malloc disk", 00:11:16.440 "block_size": 512, 00:11:16.440 "num_blocks": 65536, 00:11:16.440 "uuid": "9771209f-08b0-45a4-8ccb-ed7f0117a751", 00:11:16.440 "assigned_rate_limits": { 00:11:16.440 "rw_ios_per_sec": 0, 00:11:16.440 "rw_mbytes_per_sec": 0, 00:11:16.440 "r_mbytes_per_sec": 0, 00:11:16.440 "w_mbytes_per_sec": 0 00:11:16.440 }, 00:11:16.440 "claimed": true, 00:11:16.440 "claim_type": "exclusive_write", 00:11:16.440 "zoned": false, 00:11:16.440 "supported_io_types": { 00:11:16.440 "read": true, 00:11:16.440 "write": true, 00:11:16.440 "unmap": true, 00:11:16.440 "flush": true, 00:11:16.440 "reset": true, 00:11:16.440 "nvme_admin": false, 00:11:16.440 "nvme_io": false, 00:11:16.440 "nvme_io_md": false, 00:11:16.440 "write_zeroes": true, 00:11:16.440 "zcopy": true, 00:11:16.440 "get_zone_info": false, 00:11:16.440 "zone_management": false, 00:11:16.440 "zone_append": false, 00:11:16.440 "compare": false, 00:11:16.440 "compare_and_write": false, 00:11:16.440 "abort": true, 00:11:16.440 "seek_hole": false, 00:11:16.440 "seek_data": false, 00:11:16.440 "copy": true, 00:11:16.440 "nvme_iov_md": false 00:11:16.440 }, 00:11:16.440 "memory_domains": [ 00:11:16.440 { 00:11:16.440 "dma_device_id": "system", 00:11:16.440 "dma_device_type": 1 00:11:16.440 }, 00:11:16.440 { 00:11:16.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.440 "dma_device_type": 2 00:11:16.440 } 00:11:16.440 ], 00:11:16.440 "driver_specific": {} 
00:11:16.440 } 00:11:16.440 ] 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.440 "name": "Existed_Raid", 00:11:16.440 "uuid": "a32d79e5-b8a8-416f-8b31-1b70fa660f9e", 00:11:16.440 "strip_size_kb": 0, 00:11:16.440 "state": "configuring", 00:11:16.440 "raid_level": "raid1", 00:11:16.440 "superblock": true, 00:11:16.440 "num_base_bdevs": 4, 00:11:16.440 "num_base_bdevs_discovered": 1, 00:11:16.440 "num_base_bdevs_operational": 4, 00:11:16.440 "base_bdevs_list": [ 00:11:16.440 { 00:11:16.440 "name": "BaseBdev1", 00:11:16.440 "uuid": "9771209f-08b0-45a4-8ccb-ed7f0117a751", 00:11:16.440 "is_configured": true, 00:11:16.440 "data_offset": 2048, 00:11:16.440 "data_size": 63488 00:11:16.440 }, 00:11:16.440 { 00:11:16.440 "name": "BaseBdev2", 00:11:16.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.440 "is_configured": false, 00:11:16.440 "data_offset": 0, 00:11:16.440 "data_size": 0 00:11:16.440 }, 00:11:16.440 { 00:11:16.440 "name": "BaseBdev3", 00:11:16.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.440 "is_configured": false, 00:11:16.440 "data_offset": 0, 00:11:16.440 "data_size": 0 00:11:16.440 }, 00:11:16.440 { 00:11:16.440 "name": "BaseBdev4", 00:11:16.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.440 "is_configured": false, 00:11:16.440 "data_offset": 0, 00:11:16.440 "data_size": 0 00:11:16.440 } 00:11:16.440 ] 00:11:16.440 }' 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.440 03:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.012 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:17.012 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.012 03:18:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.012 [2024-11-20 03:18:06.371937] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:17.012 [2024-11-20 03:18:06.372002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:17.012 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.012 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:17.012 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.012 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.012 [2024-11-20 03:18:06.384024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.012 [2024-11-20 03:18:06.386137] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:17.012 [2024-11-20 03:18:06.386189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:17.012 [2024-11-20 03:18:06.386201] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:17.012 [2024-11-20 03:18:06.386214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:17.012 [2024-11-20 03:18:06.386222] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:17.012 [2024-11-20 03:18:06.386232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:17.012 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:17.013 03:18:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.013 "name": 
"Existed_Raid", 00:11:17.013 "uuid": "2d73d73e-a5b6-496d-88de-f4a169a70215", 00:11:17.013 "strip_size_kb": 0, 00:11:17.013 "state": "configuring", 00:11:17.013 "raid_level": "raid1", 00:11:17.013 "superblock": true, 00:11:17.013 "num_base_bdevs": 4, 00:11:17.013 "num_base_bdevs_discovered": 1, 00:11:17.013 "num_base_bdevs_operational": 4, 00:11:17.013 "base_bdevs_list": [ 00:11:17.013 { 00:11:17.013 "name": "BaseBdev1", 00:11:17.013 "uuid": "9771209f-08b0-45a4-8ccb-ed7f0117a751", 00:11:17.013 "is_configured": true, 00:11:17.013 "data_offset": 2048, 00:11:17.013 "data_size": 63488 00:11:17.013 }, 00:11:17.013 { 00:11:17.013 "name": "BaseBdev2", 00:11:17.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.013 "is_configured": false, 00:11:17.013 "data_offset": 0, 00:11:17.013 "data_size": 0 00:11:17.013 }, 00:11:17.013 { 00:11:17.013 "name": "BaseBdev3", 00:11:17.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.013 "is_configured": false, 00:11:17.013 "data_offset": 0, 00:11:17.013 "data_size": 0 00:11:17.013 }, 00:11:17.013 { 00:11:17.013 "name": "BaseBdev4", 00:11:17.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.013 "is_configured": false, 00:11:17.013 "data_offset": 0, 00:11:17.013 "data_size": 0 00:11:17.013 } 00:11:17.013 ] 00:11:17.013 }' 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.013 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.274 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.274 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.274 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.274 [2024-11-20 03:18:06.827652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.274 
BaseBdev2 00:11:17.274 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.274 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:17.274 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:17.274 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.274 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.275 [ 00:11:17.275 { 00:11:17.275 "name": "BaseBdev2", 00:11:17.275 "aliases": [ 00:11:17.275 "f9245bfc-7426-4c4c-8d18-63817d23f12e" 00:11:17.275 ], 00:11:17.275 "product_name": "Malloc disk", 00:11:17.275 "block_size": 512, 00:11:17.275 "num_blocks": 65536, 00:11:17.275 "uuid": "f9245bfc-7426-4c4c-8d18-63817d23f12e", 00:11:17.275 "assigned_rate_limits": { 
00:11:17.275 "rw_ios_per_sec": 0, 00:11:17.275 "rw_mbytes_per_sec": 0, 00:11:17.275 "r_mbytes_per_sec": 0, 00:11:17.275 "w_mbytes_per_sec": 0 00:11:17.275 }, 00:11:17.275 "claimed": true, 00:11:17.275 "claim_type": "exclusive_write", 00:11:17.275 "zoned": false, 00:11:17.275 "supported_io_types": { 00:11:17.275 "read": true, 00:11:17.275 "write": true, 00:11:17.275 "unmap": true, 00:11:17.275 "flush": true, 00:11:17.275 "reset": true, 00:11:17.275 "nvme_admin": false, 00:11:17.275 "nvme_io": false, 00:11:17.275 "nvme_io_md": false, 00:11:17.275 "write_zeroes": true, 00:11:17.275 "zcopy": true, 00:11:17.275 "get_zone_info": false, 00:11:17.275 "zone_management": false, 00:11:17.275 "zone_append": false, 00:11:17.275 "compare": false, 00:11:17.275 "compare_and_write": false, 00:11:17.275 "abort": true, 00:11:17.275 "seek_hole": false, 00:11:17.275 "seek_data": false, 00:11:17.275 "copy": true, 00:11:17.275 "nvme_iov_md": false 00:11:17.275 }, 00:11:17.275 "memory_domains": [ 00:11:17.275 { 00:11:17.275 "dma_device_id": "system", 00:11:17.275 "dma_device_type": 1 00:11:17.275 }, 00:11:17.275 { 00:11:17.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.275 "dma_device_type": 2 00:11:17.275 } 00:11:17.275 ], 00:11:17.275 "driver_specific": {} 00:11:17.275 } 00:11:17.275 ] 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.275 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.542 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.542 "name": "Existed_Raid", 00:11:17.542 "uuid": "2d73d73e-a5b6-496d-88de-f4a169a70215", 00:11:17.542 "strip_size_kb": 0, 00:11:17.542 "state": "configuring", 00:11:17.542 "raid_level": "raid1", 00:11:17.542 "superblock": true, 00:11:17.542 "num_base_bdevs": 4, 00:11:17.542 "num_base_bdevs_discovered": 2, 00:11:17.542 "num_base_bdevs_operational": 4, 00:11:17.542 
"base_bdevs_list": [ 00:11:17.542 { 00:11:17.542 "name": "BaseBdev1", 00:11:17.542 "uuid": "9771209f-08b0-45a4-8ccb-ed7f0117a751", 00:11:17.542 "is_configured": true, 00:11:17.542 "data_offset": 2048, 00:11:17.542 "data_size": 63488 00:11:17.542 }, 00:11:17.542 { 00:11:17.542 "name": "BaseBdev2", 00:11:17.542 "uuid": "f9245bfc-7426-4c4c-8d18-63817d23f12e", 00:11:17.542 "is_configured": true, 00:11:17.542 "data_offset": 2048, 00:11:17.542 "data_size": 63488 00:11:17.542 }, 00:11:17.542 { 00:11:17.542 "name": "BaseBdev3", 00:11:17.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.542 "is_configured": false, 00:11:17.542 "data_offset": 0, 00:11:17.542 "data_size": 0 00:11:17.542 }, 00:11:17.542 { 00:11:17.542 "name": "BaseBdev4", 00:11:17.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.542 "is_configured": false, 00:11:17.542 "data_offset": 0, 00:11:17.542 "data_size": 0 00:11:17.542 } 00:11:17.542 ] 00:11:17.542 }' 00:11:17.542 03:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.542 03:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.802 [2024-11-20 03:18:07.368011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.802 BaseBdev3 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.802 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.802 [ 00:11:17.802 { 00:11:17.802 "name": "BaseBdev3", 00:11:17.802 "aliases": [ 00:11:17.802 "9c6387a3-d7aa-433a-93a5-691eeae8659e" 00:11:17.802 ], 00:11:17.802 "product_name": "Malloc disk", 00:11:17.802 "block_size": 512, 00:11:17.802 "num_blocks": 65536, 00:11:17.802 "uuid": "9c6387a3-d7aa-433a-93a5-691eeae8659e", 00:11:17.803 "assigned_rate_limits": { 00:11:17.803 "rw_ios_per_sec": 0, 00:11:17.803 "rw_mbytes_per_sec": 0, 00:11:17.803 "r_mbytes_per_sec": 0, 00:11:17.803 "w_mbytes_per_sec": 0 00:11:17.803 }, 00:11:17.803 "claimed": true, 00:11:17.803 "claim_type": "exclusive_write", 00:11:17.803 "zoned": false, 00:11:17.803 "supported_io_types": { 00:11:17.803 "read": true, 00:11:17.803 
"write": true, 00:11:17.803 "unmap": true, 00:11:17.803 "flush": true, 00:11:17.803 "reset": true, 00:11:17.803 "nvme_admin": false, 00:11:17.803 "nvme_io": false, 00:11:17.803 "nvme_io_md": false, 00:11:17.803 "write_zeroes": true, 00:11:17.803 "zcopy": true, 00:11:17.803 "get_zone_info": false, 00:11:17.803 "zone_management": false, 00:11:17.803 "zone_append": false, 00:11:17.803 "compare": false, 00:11:17.803 "compare_and_write": false, 00:11:17.803 "abort": true, 00:11:17.803 "seek_hole": false, 00:11:17.803 "seek_data": false, 00:11:17.803 "copy": true, 00:11:17.803 "nvme_iov_md": false 00:11:17.803 }, 00:11:17.803 "memory_domains": [ 00:11:17.803 { 00:11:17.803 "dma_device_id": "system", 00:11:17.803 "dma_device_type": 1 00:11:17.803 }, 00:11:17.803 { 00:11:17.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.803 "dma_device_type": 2 00:11:17.803 } 00:11:17.803 ], 00:11:17.803 "driver_specific": {} 00:11:17.803 } 00:11:17.803 ] 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.803 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.062 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.062 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.062 "name": "Existed_Raid", 00:11:18.062 "uuid": "2d73d73e-a5b6-496d-88de-f4a169a70215", 00:11:18.062 "strip_size_kb": 0, 00:11:18.062 "state": "configuring", 00:11:18.062 "raid_level": "raid1", 00:11:18.062 "superblock": true, 00:11:18.062 "num_base_bdevs": 4, 00:11:18.062 "num_base_bdevs_discovered": 3, 00:11:18.062 "num_base_bdevs_operational": 4, 00:11:18.062 "base_bdevs_list": [ 00:11:18.062 { 00:11:18.062 "name": "BaseBdev1", 00:11:18.062 "uuid": "9771209f-08b0-45a4-8ccb-ed7f0117a751", 00:11:18.062 "is_configured": true, 00:11:18.062 "data_offset": 2048, 00:11:18.062 "data_size": 63488 00:11:18.062 }, 00:11:18.062 { 00:11:18.062 "name": "BaseBdev2", 00:11:18.062 "uuid": 
"f9245bfc-7426-4c4c-8d18-63817d23f12e", 00:11:18.062 "is_configured": true, 00:11:18.062 "data_offset": 2048, 00:11:18.062 "data_size": 63488 00:11:18.062 }, 00:11:18.062 { 00:11:18.063 "name": "BaseBdev3", 00:11:18.063 "uuid": "9c6387a3-d7aa-433a-93a5-691eeae8659e", 00:11:18.063 "is_configured": true, 00:11:18.063 "data_offset": 2048, 00:11:18.063 "data_size": 63488 00:11:18.063 }, 00:11:18.063 { 00:11:18.063 "name": "BaseBdev4", 00:11:18.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.063 "is_configured": false, 00:11:18.063 "data_offset": 0, 00:11:18.063 "data_size": 0 00:11:18.063 } 00:11:18.063 ] 00:11:18.063 }' 00:11:18.063 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.063 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.323 [2024-11-20 03:18:07.878967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.323 [2024-11-20 03:18:07.879371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:18.323 [2024-11-20 03:18:07.879434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.323 [2024-11-20 03:18:07.879766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:18.323 BaseBdev4 00:11:18.323 [2024-11-20 03:18:07.880007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:18.323 [2024-11-20 03:18:07.880022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:18.323 [2024-11-20 03:18:07.880180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.323 [ 00:11:18.323 { 00:11:18.323 "name": "BaseBdev4", 00:11:18.323 "aliases": [ 00:11:18.323 "829ea326-848c-44ca-867a-efbf6c60f635" 00:11:18.323 ], 00:11:18.323 "product_name": "Malloc disk", 00:11:18.323 "block_size": 512, 00:11:18.323 
"num_blocks": 65536, 00:11:18.323 "uuid": "829ea326-848c-44ca-867a-efbf6c60f635", 00:11:18.323 "assigned_rate_limits": { 00:11:18.323 "rw_ios_per_sec": 0, 00:11:18.323 "rw_mbytes_per_sec": 0, 00:11:18.323 "r_mbytes_per_sec": 0, 00:11:18.323 "w_mbytes_per_sec": 0 00:11:18.323 }, 00:11:18.323 "claimed": true, 00:11:18.323 "claim_type": "exclusive_write", 00:11:18.323 "zoned": false, 00:11:18.323 "supported_io_types": { 00:11:18.323 "read": true, 00:11:18.323 "write": true, 00:11:18.323 "unmap": true, 00:11:18.323 "flush": true, 00:11:18.323 "reset": true, 00:11:18.323 "nvme_admin": false, 00:11:18.323 "nvme_io": false, 00:11:18.323 "nvme_io_md": false, 00:11:18.323 "write_zeroes": true, 00:11:18.323 "zcopy": true, 00:11:18.323 "get_zone_info": false, 00:11:18.323 "zone_management": false, 00:11:18.323 "zone_append": false, 00:11:18.323 "compare": false, 00:11:18.323 "compare_and_write": false, 00:11:18.323 "abort": true, 00:11:18.323 "seek_hole": false, 00:11:18.323 "seek_data": false, 00:11:18.323 "copy": true, 00:11:18.323 "nvme_iov_md": false 00:11:18.323 }, 00:11:18.323 "memory_domains": [ 00:11:18.323 { 00:11:18.323 "dma_device_id": "system", 00:11:18.323 "dma_device_type": 1 00:11:18.323 }, 00:11:18.323 { 00:11:18.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.323 "dma_device_type": 2 00:11:18.323 } 00:11:18.323 ], 00:11:18.323 "driver_specific": {} 00:11:18.323 } 00:11:18.323 ] 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.323 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.324 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.324 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.324 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.324 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.324 "name": "Existed_Raid", 00:11:18.324 "uuid": "2d73d73e-a5b6-496d-88de-f4a169a70215", 00:11:18.324 "strip_size_kb": 0, 00:11:18.324 "state": "online", 00:11:18.324 "raid_level": "raid1", 00:11:18.324 "superblock": true, 00:11:18.324 "num_base_bdevs": 4, 
00:11:18.324 "num_base_bdevs_discovered": 4, 00:11:18.324 "num_base_bdevs_operational": 4, 00:11:18.324 "base_bdevs_list": [ 00:11:18.324 { 00:11:18.324 "name": "BaseBdev1", 00:11:18.324 "uuid": "9771209f-08b0-45a4-8ccb-ed7f0117a751", 00:11:18.324 "is_configured": true, 00:11:18.324 "data_offset": 2048, 00:11:18.324 "data_size": 63488 00:11:18.324 }, 00:11:18.324 { 00:11:18.324 "name": "BaseBdev2", 00:11:18.324 "uuid": "f9245bfc-7426-4c4c-8d18-63817d23f12e", 00:11:18.324 "is_configured": true, 00:11:18.324 "data_offset": 2048, 00:11:18.324 "data_size": 63488 00:11:18.324 }, 00:11:18.324 { 00:11:18.324 "name": "BaseBdev3", 00:11:18.324 "uuid": "9c6387a3-d7aa-433a-93a5-691eeae8659e", 00:11:18.324 "is_configured": true, 00:11:18.324 "data_offset": 2048, 00:11:18.324 "data_size": 63488 00:11:18.324 }, 00:11:18.324 { 00:11:18.324 "name": "BaseBdev4", 00:11:18.324 "uuid": "829ea326-848c-44ca-867a-efbf6c60f635", 00:11:18.324 "is_configured": true, 00:11:18.324 "data_offset": 2048, 00:11:18.324 "data_size": 63488 00:11:18.324 } 00:11:18.324 ] 00:11:18.324 }' 00:11:18.324 03:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.324 03:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.894 
03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.894 [2024-11-20 03:18:08.346685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.894 "name": "Existed_Raid", 00:11:18.894 "aliases": [ 00:11:18.894 "2d73d73e-a5b6-496d-88de-f4a169a70215" 00:11:18.894 ], 00:11:18.894 "product_name": "Raid Volume", 00:11:18.894 "block_size": 512, 00:11:18.894 "num_blocks": 63488, 00:11:18.894 "uuid": "2d73d73e-a5b6-496d-88de-f4a169a70215", 00:11:18.894 "assigned_rate_limits": { 00:11:18.894 "rw_ios_per_sec": 0, 00:11:18.894 "rw_mbytes_per_sec": 0, 00:11:18.894 "r_mbytes_per_sec": 0, 00:11:18.894 "w_mbytes_per_sec": 0 00:11:18.894 }, 00:11:18.894 "claimed": false, 00:11:18.894 "zoned": false, 00:11:18.894 "supported_io_types": { 00:11:18.894 "read": true, 00:11:18.894 "write": true, 00:11:18.894 "unmap": false, 00:11:18.894 "flush": false, 00:11:18.894 "reset": true, 00:11:18.894 "nvme_admin": false, 00:11:18.894 "nvme_io": false, 00:11:18.894 "nvme_io_md": false, 00:11:18.894 "write_zeroes": true, 00:11:18.894 "zcopy": false, 00:11:18.894 "get_zone_info": false, 00:11:18.894 "zone_management": false, 00:11:18.894 "zone_append": false, 00:11:18.894 "compare": false, 00:11:18.894 "compare_and_write": false, 00:11:18.894 "abort": false, 00:11:18.894 "seek_hole": false, 00:11:18.894 "seek_data": false, 00:11:18.894 "copy": false, 00:11:18.894 
"nvme_iov_md": false 00:11:18.894 }, 00:11:18.894 "memory_domains": [ 00:11:18.894 { 00:11:18.894 "dma_device_id": "system", 00:11:18.894 "dma_device_type": 1 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.894 "dma_device_type": 2 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "dma_device_id": "system", 00:11:18.894 "dma_device_type": 1 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.894 "dma_device_type": 2 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "dma_device_id": "system", 00:11:18.894 "dma_device_type": 1 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.894 "dma_device_type": 2 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "dma_device_id": "system", 00:11:18.894 "dma_device_type": 1 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.894 "dma_device_type": 2 00:11:18.894 } 00:11:18.894 ], 00:11:18.894 "driver_specific": { 00:11:18.894 "raid": { 00:11:18.894 "uuid": "2d73d73e-a5b6-496d-88de-f4a169a70215", 00:11:18.894 "strip_size_kb": 0, 00:11:18.894 "state": "online", 00:11:18.894 "raid_level": "raid1", 00:11:18.894 "superblock": true, 00:11:18.894 "num_base_bdevs": 4, 00:11:18.894 "num_base_bdevs_discovered": 4, 00:11:18.894 "num_base_bdevs_operational": 4, 00:11:18.894 "base_bdevs_list": [ 00:11:18.894 { 00:11:18.894 "name": "BaseBdev1", 00:11:18.894 "uuid": "9771209f-08b0-45a4-8ccb-ed7f0117a751", 00:11:18.894 "is_configured": true, 00:11:18.894 "data_offset": 2048, 00:11:18.894 "data_size": 63488 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "name": "BaseBdev2", 00:11:18.894 "uuid": "f9245bfc-7426-4c4c-8d18-63817d23f12e", 00:11:18.894 "is_configured": true, 00:11:18.894 "data_offset": 2048, 00:11:18.894 "data_size": 63488 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "name": "BaseBdev3", 00:11:18.894 "uuid": "9c6387a3-d7aa-433a-93a5-691eeae8659e", 00:11:18.894 "is_configured": true, 
00:11:18.894 "data_offset": 2048, 00:11:18.894 "data_size": 63488 00:11:18.894 }, 00:11:18.894 { 00:11:18.894 "name": "BaseBdev4", 00:11:18.894 "uuid": "829ea326-848c-44ca-867a-efbf6c60f635", 00:11:18.894 "is_configured": true, 00:11:18.894 "data_offset": 2048, 00:11:18.894 "data_size": 63488 00:11:18.894 } 00:11:18.894 ] 00:11:18.894 } 00:11:18.894 } 00:11:18.894 }' 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:18.894 BaseBdev2 00:11:18.894 BaseBdev3 00:11:18.894 BaseBdev4' 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.894 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.895 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.895 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:18.895 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.895 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.895 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.895 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.895 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.895 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.895 03:18:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:19.154 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.155 [2024-11-20 03:18:08.637862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:19.155 03:18:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.155 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.414 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.414 "name": "Existed_Raid", 00:11:19.414 "uuid": "2d73d73e-a5b6-496d-88de-f4a169a70215", 00:11:19.414 "strip_size_kb": 0, 00:11:19.414 
"state": "online", 00:11:19.414 "raid_level": "raid1", 00:11:19.414 "superblock": true, 00:11:19.414 "num_base_bdevs": 4, 00:11:19.414 "num_base_bdevs_discovered": 3, 00:11:19.414 "num_base_bdevs_operational": 3, 00:11:19.414 "base_bdevs_list": [ 00:11:19.414 { 00:11:19.414 "name": null, 00:11:19.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.414 "is_configured": false, 00:11:19.414 "data_offset": 0, 00:11:19.414 "data_size": 63488 00:11:19.414 }, 00:11:19.414 { 00:11:19.414 "name": "BaseBdev2", 00:11:19.414 "uuid": "f9245bfc-7426-4c4c-8d18-63817d23f12e", 00:11:19.414 "is_configured": true, 00:11:19.414 "data_offset": 2048, 00:11:19.414 "data_size": 63488 00:11:19.414 }, 00:11:19.414 { 00:11:19.414 "name": "BaseBdev3", 00:11:19.414 "uuid": "9c6387a3-d7aa-433a-93a5-691eeae8659e", 00:11:19.414 "is_configured": true, 00:11:19.414 "data_offset": 2048, 00:11:19.414 "data_size": 63488 00:11:19.414 }, 00:11:19.414 { 00:11:19.414 "name": "BaseBdev4", 00:11:19.414 "uuid": "829ea326-848c-44ca-867a-efbf6c60f635", 00:11:19.414 "is_configured": true, 00:11:19.414 "data_offset": 2048, 00:11:19.414 "data_size": 63488 00:11:19.414 } 00:11:19.414 ] 00:11:19.414 }' 00:11:19.415 03:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.415 03:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.674 03:18:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.674 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.674 [2024-11-20 03:18:09.275733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.933 [2024-11-20 03:18:09.431016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.933 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.934 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.934 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.934 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.934 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.934 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.934 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.193 [2024-11-20 03:18:09.588733] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:20.193 [2024-11-20 03:18:09.588921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.193 [2024-11-20 03:18:09.694308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.193 [2024-11-20 03:18:09.694362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.193 [2024-11-20 03:18:09.694374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.193 BaseBdev2 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:20.193 [ 00:11:20.193 { 00:11:20.193 "name": "BaseBdev2", 00:11:20.193 "aliases": [ 00:11:20.193 "9a5f7d65-f910-43df-bc29-4a5c1a381201" 00:11:20.193 ], 00:11:20.193 "product_name": "Malloc disk", 00:11:20.193 "block_size": 512, 00:11:20.193 "num_blocks": 65536, 00:11:20.193 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 00:11:20.193 "assigned_rate_limits": { 00:11:20.193 "rw_ios_per_sec": 0, 00:11:20.193 "rw_mbytes_per_sec": 0, 00:11:20.193 "r_mbytes_per_sec": 0, 00:11:20.193 "w_mbytes_per_sec": 0 00:11:20.193 }, 00:11:20.193 "claimed": false, 00:11:20.193 "zoned": false, 00:11:20.193 "supported_io_types": { 00:11:20.193 "read": true, 00:11:20.193 "write": true, 00:11:20.193 "unmap": true, 00:11:20.193 "flush": true, 00:11:20.193 "reset": true, 00:11:20.193 "nvme_admin": false, 00:11:20.193 "nvme_io": false, 00:11:20.193 "nvme_io_md": false, 00:11:20.193 "write_zeroes": true, 00:11:20.193 "zcopy": true, 00:11:20.193 "get_zone_info": false, 00:11:20.193 "zone_management": false, 00:11:20.193 "zone_append": false, 00:11:20.193 "compare": false, 00:11:20.193 "compare_and_write": false, 00:11:20.193 "abort": true, 00:11:20.193 "seek_hole": false, 00:11:20.193 "seek_data": false, 00:11:20.193 "copy": true, 00:11:20.193 "nvme_iov_md": false 00:11:20.193 }, 00:11:20.193 "memory_domains": [ 00:11:20.193 { 00:11:20.193 "dma_device_id": "system", 00:11:20.193 "dma_device_type": 1 00:11:20.193 }, 00:11:20.193 { 00:11:20.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.193 "dma_device_type": 2 00:11:20.193 } 00:11:20.193 ], 00:11:20.193 "driver_specific": {} 00:11:20.193 } 00:11:20.193 ] 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.193 03:18:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.193 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.454 BaseBdev3 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.454 03:18:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.454 [ 00:11:20.454 { 00:11:20.454 "name": "BaseBdev3", 00:11:20.454 "aliases": [ 00:11:20.454 "5f2fbfbb-f7be-4a46-a390-aeb521b74237" 00:11:20.454 ], 00:11:20.454 "product_name": "Malloc disk", 00:11:20.454 "block_size": 512, 00:11:20.454 "num_blocks": 65536, 00:11:20.454 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:20.454 "assigned_rate_limits": { 00:11:20.454 "rw_ios_per_sec": 0, 00:11:20.454 "rw_mbytes_per_sec": 0, 00:11:20.454 "r_mbytes_per_sec": 0, 00:11:20.454 "w_mbytes_per_sec": 0 00:11:20.454 }, 00:11:20.454 "claimed": false, 00:11:20.454 "zoned": false, 00:11:20.454 "supported_io_types": { 00:11:20.454 "read": true, 00:11:20.454 "write": true, 00:11:20.454 "unmap": true, 00:11:20.454 "flush": true, 00:11:20.454 "reset": true, 00:11:20.454 "nvme_admin": false, 00:11:20.454 "nvme_io": false, 00:11:20.454 "nvme_io_md": false, 00:11:20.454 "write_zeroes": true, 00:11:20.454 "zcopy": true, 00:11:20.454 "get_zone_info": false, 00:11:20.454 "zone_management": false, 00:11:20.454 "zone_append": false, 00:11:20.454 "compare": false, 00:11:20.454 "compare_and_write": false, 00:11:20.454 "abort": true, 00:11:20.454 "seek_hole": false, 00:11:20.454 "seek_data": false, 00:11:20.454 "copy": true, 00:11:20.454 "nvme_iov_md": false 00:11:20.454 }, 00:11:20.454 "memory_domains": [ 00:11:20.454 { 00:11:20.454 "dma_device_id": "system", 00:11:20.454 "dma_device_type": 1 00:11:20.454 }, 00:11:20.454 { 00:11:20.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.454 "dma_device_type": 2 00:11:20.454 } 00:11:20.454 ], 00:11:20.454 "driver_specific": {} 00:11:20.454 } 00:11:20.454 ] 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.454 BaseBdev4 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.454 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.454 [ 00:11:20.454 { 00:11:20.454 "name": "BaseBdev4", 00:11:20.454 "aliases": [ 00:11:20.454 "b8c2735b-562e-4f9a-a94a-9be53aab2033" 00:11:20.454 ], 00:11:20.454 "product_name": "Malloc disk", 00:11:20.454 "block_size": 512, 00:11:20.454 "num_blocks": 65536, 00:11:20.454 "uuid": "b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:20.454 "assigned_rate_limits": { 00:11:20.454 "rw_ios_per_sec": 0, 00:11:20.454 "rw_mbytes_per_sec": 0, 00:11:20.454 "r_mbytes_per_sec": 0, 00:11:20.454 "w_mbytes_per_sec": 0 00:11:20.455 }, 00:11:20.455 "claimed": false, 00:11:20.455 "zoned": false, 00:11:20.455 "supported_io_types": { 00:11:20.455 "read": true, 00:11:20.455 "write": true, 00:11:20.455 "unmap": true, 00:11:20.455 "flush": true, 00:11:20.455 "reset": true, 00:11:20.455 "nvme_admin": false, 00:11:20.455 "nvme_io": false, 00:11:20.455 "nvme_io_md": false, 00:11:20.455 "write_zeroes": true, 00:11:20.455 "zcopy": true, 00:11:20.455 "get_zone_info": false, 00:11:20.455 "zone_management": false, 00:11:20.455 "zone_append": false, 00:11:20.455 "compare": false, 00:11:20.455 "compare_and_write": false, 00:11:20.455 "abort": true, 00:11:20.455 "seek_hole": false, 00:11:20.455 "seek_data": false, 00:11:20.455 "copy": true, 00:11:20.455 "nvme_iov_md": false 00:11:20.455 }, 00:11:20.455 "memory_domains": [ 00:11:20.455 { 00:11:20.455 "dma_device_id": "system", 00:11:20.455 "dma_device_type": 1 00:11:20.455 }, 00:11:20.455 { 00:11:20.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.455 "dma_device_type": 2 00:11:20.455 } 00:11:20.455 ], 00:11:20.455 "driver_specific": {} 00:11:20.455 } 00:11:20.455 ] 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.455 [2024-11-20 03:18:09.985261] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.455 [2024-11-20 03:18:09.985352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.455 [2024-11-20 03:18:09.985397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.455 [2024-11-20 03:18:09.987364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.455 [2024-11-20 03:18:09.987454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.455 03:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.455 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.455 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.455 "name": "Existed_Raid", 00:11:20.455 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:20.455 "strip_size_kb": 0, 00:11:20.455 "state": "configuring", 00:11:20.455 "raid_level": "raid1", 00:11:20.455 "superblock": true, 00:11:20.455 "num_base_bdevs": 4, 00:11:20.455 "num_base_bdevs_discovered": 3, 00:11:20.455 "num_base_bdevs_operational": 4, 00:11:20.455 "base_bdevs_list": [ 00:11:20.455 { 00:11:20.455 "name": "BaseBdev1", 00:11:20.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.455 "is_configured": false, 00:11:20.455 "data_offset": 0, 00:11:20.455 "data_size": 0 00:11:20.455 }, 00:11:20.455 { 00:11:20.455 "name": "BaseBdev2", 00:11:20.455 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 
00:11:20.455 "is_configured": true, 00:11:20.455 "data_offset": 2048, 00:11:20.455 "data_size": 63488 00:11:20.455 }, 00:11:20.455 { 00:11:20.455 "name": "BaseBdev3", 00:11:20.455 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:20.455 "is_configured": true, 00:11:20.455 "data_offset": 2048, 00:11:20.455 "data_size": 63488 00:11:20.455 }, 00:11:20.455 { 00:11:20.455 "name": "BaseBdev4", 00:11:20.455 "uuid": "b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:20.455 "is_configured": true, 00:11:20.455 "data_offset": 2048, 00:11:20.455 "data_size": 63488 00:11:20.455 } 00:11:20.455 ] 00:11:20.455 }' 00:11:20.455 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.455 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.025 [2024-11-20 03:18:10.452478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.025 "name": "Existed_Raid", 00:11:21.025 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:21.025 "strip_size_kb": 0, 00:11:21.025 "state": "configuring", 00:11:21.025 "raid_level": "raid1", 00:11:21.025 "superblock": true, 00:11:21.025 "num_base_bdevs": 4, 00:11:21.025 "num_base_bdevs_discovered": 2, 00:11:21.025 "num_base_bdevs_operational": 4, 00:11:21.025 "base_bdevs_list": [ 00:11:21.025 { 00:11:21.025 "name": "BaseBdev1", 00:11:21.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.025 "is_configured": false, 00:11:21.025 "data_offset": 0, 00:11:21.025 "data_size": 0 00:11:21.025 }, 00:11:21.025 { 00:11:21.025 "name": null, 00:11:21.025 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 00:11:21.025 
"is_configured": false, 00:11:21.025 "data_offset": 0, 00:11:21.025 "data_size": 63488 00:11:21.025 }, 00:11:21.025 { 00:11:21.025 "name": "BaseBdev3", 00:11:21.025 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:21.025 "is_configured": true, 00:11:21.025 "data_offset": 2048, 00:11:21.025 "data_size": 63488 00:11:21.025 }, 00:11:21.025 { 00:11:21.025 "name": "BaseBdev4", 00:11:21.025 "uuid": "b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:21.025 "is_configured": true, 00:11:21.025 "data_offset": 2048, 00:11:21.025 "data_size": 63488 00:11:21.025 } 00:11:21.025 ] 00:11:21.025 }' 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.025 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.285 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:21.285 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.285 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.285 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.285 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.285 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:21.285 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.285 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.285 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.285 [2024-11-20 03:18:10.917603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.544 BaseBdev1 
00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.544 [ 00:11:21.544 { 00:11:21.544 "name": "BaseBdev1", 00:11:21.544 "aliases": [ 00:11:21.544 "d820a81b-9d77-4def-9c40-3a8303f7917d" 00:11:21.544 ], 00:11:21.544 "product_name": "Malloc disk", 00:11:21.544 "block_size": 512, 00:11:21.544 "num_blocks": 65536, 00:11:21.544 "uuid": "d820a81b-9d77-4def-9c40-3a8303f7917d", 00:11:21.544 "assigned_rate_limits": { 00:11:21.544 
"rw_ios_per_sec": 0, 00:11:21.544 "rw_mbytes_per_sec": 0, 00:11:21.544 "r_mbytes_per_sec": 0, 00:11:21.544 "w_mbytes_per_sec": 0 00:11:21.544 }, 00:11:21.544 "claimed": true, 00:11:21.544 "claim_type": "exclusive_write", 00:11:21.544 "zoned": false, 00:11:21.544 "supported_io_types": { 00:11:21.544 "read": true, 00:11:21.544 "write": true, 00:11:21.544 "unmap": true, 00:11:21.544 "flush": true, 00:11:21.544 "reset": true, 00:11:21.544 "nvme_admin": false, 00:11:21.544 "nvme_io": false, 00:11:21.544 "nvme_io_md": false, 00:11:21.544 "write_zeroes": true, 00:11:21.544 "zcopy": true, 00:11:21.544 "get_zone_info": false, 00:11:21.544 "zone_management": false, 00:11:21.544 "zone_append": false, 00:11:21.544 "compare": false, 00:11:21.544 "compare_and_write": false, 00:11:21.544 "abort": true, 00:11:21.544 "seek_hole": false, 00:11:21.544 "seek_data": false, 00:11:21.544 "copy": true, 00:11:21.544 "nvme_iov_md": false 00:11:21.544 }, 00:11:21.544 "memory_domains": [ 00:11:21.544 { 00:11:21.544 "dma_device_id": "system", 00:11:21.544 "dma_device_type": 1 00:11:21.544 }, 00:11:21.544 { 00:11:21.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.544 "dma_device_type": 2 00:11:21.544 } 00:11:21.544 ], 00:11:21.544 "driver_specific": {} 00:11:21.544 } 00:11:21.544 ] 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.544 03:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.545 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.545 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.545 03:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.545 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.545 "name": "Existed_Raid", 00:11:21.545 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:21.545 "strip_size_kb": 0, 00:11:21.545 "state": "configuring", 00:11:21.545 "raid_level": "raid1", 00:11:21.545 "superblock": true, 00:11:21.545 "num_base_bdevs": 4, 00:11:21.545 "num_base_bdevs_discovered": 3, 00:11:21.545 "num_base_bdevs_operational": 4, 00:11:21.545 "base_bdevs_list": [ 00:11:21.545 { 00:11:21.545 "name": "BaseBdev1", 00:11:21.545 "uuid": "d820a81b-9d77-4def-9c40-3a8303f7917d", 00:11:21.545 "is_configured": true, 00:11:21.545 "data_offset": 2048, 00:11:21.545 "data_size": 63488 
00:11:21.545 }, 00:11:21.545 { 00:11:21.545 "name": null, 00:11:21.545 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 00:11:21.545 "is_configured": false, 00:11:21.545 "data_offset": 0, 00:11:21.545 "data_size": 63488 00:11:21.545 }, 00:11:21.545 { 00:11:21.545 "name": "BaseBdev3", 00:11:21.545 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:21.545 "is_configured": true, 00:11:21.545 "data_offset": 2048, 00:11:21.545 "data_size": 63488 00:11:21.545 }, 00:11:21.545 { 00:11:21.545 "name": "BaseBdev4", 00:11:21.545 "uuid": "b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:21.545 "is_configured": true, 00:11:21.545 "data_offset": 2048, 00:11:21.545 "data_size": 63488 00:11:21.545 } 00:11:21.545 ] 00:11:21.545 }' 00:11:21.545 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.545 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.804 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.804 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.804 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.804 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.804 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.064 
[2024-11-20 03:18:11.452810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.064 03:18:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.064 "name": "Existed_Raid", 00:11:22.064 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:22.064 "strip_size_kb": 0, 00:11:22.064 "state": "configuring", 00:11:22.064 "raid_level": "raid1", 00:11:22.064 "superblock": true, 00:11:22.064 "num_base_bdevs": 4, 00:11:22.064 "num_base_bdevs_discovered": 2, 00:11:22.064 "num_base_bdevs_operational": 4, 00:11:22.064 "base_bdevs_list": [ 00:11:22.064 { 00:11:22.064 "name": "BaseBdev1", 00:11:22.064 "uuid": "d820a81b-9d77-4def-9c40-3a8303f7917d", 00:11:22.064 "is_configured": true, 00:11:22.064 "data_offset": 2048, 00:11:22.064 "data_size": 63488 00:11:22.064 }, 00:11:22.064 { 00:11:22.064 "name": null, 00:11:22.064 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 00:11:22.064 "is_configured": false, 00:11:22.064 "data_offset": 0, 00:11:22.064 "data_size": 63488 00:11:22.064 }, 00:11:22.064 { 00:11:22.064 "name": null, 00:11:22.064 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:22.064 "is_configured": false, 00:11:22.064 "data_offset": 0, 00:11:22.064 "data_size": 63488 00:11:22.064 }, 00:11:22.064 { 00:11:22.064 "name": "BaseBdev4", 00:11:22.064 "uuid": "b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:22.064 "is_configured": true, 00:11:22.064 "data_offset": 2048, 00:11:22.064 "data_size": 63488 00:11:22.064 } 00:11:22.064 ] 00:11:22.064 }' 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.064 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.324 03:18:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.324 [2024-11-20 03:18:11.927973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.324 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.584 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.584 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.584 "name": "Existed_Raid", 00:11:22.584 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:22.584 "strip_size_kb": 0, 00:11:22.584 "state": "configuring", 00:11:22.584 "raid_level": "raid1", 00:11:22.584 "superblock": true, 00:11:22.584 "num_base_bdevs": 4, 00:11:22.584 "num_base_bdevs_discovered": 3, 00:11:22.584 "num_base_bdevs_operational": 4, 00:11:22.584 "base_bdevs_list": [ 00:11:22.584 { 00:11:22.584 "name": "BaseBdev1", 00:11:22.584 "uuid": "d820a81b-9d77-4def-9c40-3a8303f7917d", 00:11:22.584 "is_configured": true, 00:11:22.584 "data_offset": 2048, 00:11:22.584 "data_size": 63488 00:11:22.584 }, 00:11:22.584 { 00:11:22.584 "name": null, 00:11:22.584 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 00:11:22.584 "is_configured": false, 00:11:22.584 "data_offset": 0, 00:11:22.584 "data_size": 63488 00:11:22.584 }, 00:11:22.584 { 00:11:22.584 "name": "BaseBdev3", 00:11:22.584 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:22.584 "is_configured": true, 00:11:22.584 "data_offset": 2048, 00:11:22.584 "data_size": 63488 00:11:22.584 }, 00:11:22.584 { 00:11:22.584 "name": "BaseBdev4", 00:11:22.584 "uuid": 
"b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:22.584 "is_configured": true, 00:11:22.584 "data_offset": 2048, 00:11:22.584 "data_size": 63488 00:11:22.584 } 00:11:22.584 ] 00:11:22.584 }' 00:11:22.584 03:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.584 03:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.844 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.844 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.844 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.844 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.844 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.844 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:22.844 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.844 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.844 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.844 [2024-11-20 03:18:12.387250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.103 "name": "Existed_Raid", 00:11:23.103 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:23.103 "strip_size_kb": 0, 00:11:23.103 "state": "configuring", 00:11:23.103 "raid_level": "raid1", 00:11:23.103 "superblock": true, 00:11:23.103 "num_base_bdevs": 4, 00:11:23.103 "num_base_bdevs_discovered": 2, 00:11:23.103 "num_base_bdevs_operational": 4, 00:11:23.103 "base_bdevs_list": [ 00:11:23.103 { 00:11:23.103 "name": null, 00:11:23.103 
"uuid": "d820a81b-9d77-4def-9c40-3a8303f7917d", 00:11:23.103 "is_configured": false, 00:11:23.103 "data_offset": 0, 00:11:23.103 "data_size": 63488 00:11:23.103 }, 00:11:23.103 { 00:11:23.103 "name": null, 00:11:23.103 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 00:11:23.103 "is_configured": false, 00:11:23.103 "data_offset": 0, 00:11:23.103 "data_size": 63488 00:11:23.103 }, 00:11:23.103 { 00:11:23.103 "name": "BaseBdev3", 00:11:23.103 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:23.103 "is_configured": true, 00:11:23.103 "data_offset": 2048, 00:11:23.103 "data_size": 63488 00:11:23.103 }, 00:11:23.103 { 00:11:23.103 "name": "BaseBdev4", 00:11:23.103 "uuid": "b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:23.103 "is_configured": true, 00:11:23.103 "data_offset": 2048, 00:11:23.103 "data_size": 63488 00:11:23.103 } 00:11:23.103 ] 00:11:23.103 }' 00:11:23.103 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.104 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.364 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:23.364 03:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.364 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.364 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.364 03:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.626 [2024-11-20 03:18:13.020291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.626 03:18:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.626 "name": "Existed_Raid", 00:11:23.626 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:23.626 "strip_size_kb": 0, 00:11:23.626 "state": "configuring", 00:11:23.626 "raid_level": "raid1", 00:11:23.626 "superblock": true, 00:11:23.626 "num_base_bdevs": 4, 00:11:23.626 "num_base_bdevs_discovered": 3, 00:11:23.626 "num_base_bdevs_operational": 4, 00:11:23.626 "base_bdevs_list": [ 00:11:23.626 { 00:11:23.626 "name": null, 00:11:23.626 "uuid": "d820a81b-9d77-4def-9c40-3a8303f7917d", 00:11:23.626 "is_configured": false, 00:11:23.626 "data_offset": 0, 00:11:23.626 "data_size": 63488 00:11:23.626 }, 00:11:23.626 { 00:11:23.626 "name": "BaseBdev2", 00:11:23.626 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 00:11:23.626 "is_configured": true, 00:11:23.626 "data_offset": 2048, 00:11:23.626 "data_size": 63488 00:11:23.626 }, 00:11:23.626 { 00:11:23.626 "name": "BaseBdev3", 00:11:23.626 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:23.626 "is_configured": true, 00:11:23.626 "data_offset": 2048, 00:11:23.626 "data_size": 63488 00:11:23.626 }, 00:11:23.626 { 00:11:23.626 "name": "BaseBdev4", 00:11:23.626 "uuid": "b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:23.626 "is_configured": true, 00:11:23.626 "data_offset": 2048, 00:11:23.626 "data_size": 63488 00:11:23.626 } 00:11:23.626 ] 00:11:23.626 }' 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.626 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.885 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:23.885 03:18:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.886 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.886 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.886 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.886 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:23.886 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:23.886 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.886 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.886 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d820a81b-9d77-4def-9c40-3a8303f7917d 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.146 [2024-11-20 03:18:13.573316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:24.146 [2024-11-20 03:18:13.573561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:24.146 [2024-11-20 03:18:13.573577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.146 [2024-11-20 03:18:13.573872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:24.146 [2024-11-20 03:18:13.574031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:24.146 [2024-11-20 03:18:13.574041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:24.146 NewBaseBdev 00:11:24.146 [2024-11-20 03:18:13.574192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.146 03:18:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.146 [ 00:11:24.146 { 00:11:24.146 "name": "NewBaseBdev", 00:11:24.146 "aliases": [ 00:11:24.146 "d820a81b-9d77-4def-9c40-3a8303f7917d" 00:11:24.146 ], 00:11:24.146 "product_name": "Malloc disk", 00:11:24.146 "block_size": 512, 00:11:24.146 "num_blocks": 65536, 00:11:24.146 "uuid": "d820a81b-9d77-4def-9c40-3a8303f7917d", 00:11:24.146 "assigned_rate_limits": { 00:11:24.146 "rw_ios_per_sec": 0, 00:11:24.146 "rw_mbytes_per_sec": 0, 00:11:24.146 "r_mbytes_per_sec": 0, 00:11:24.146 "w_mbytes_per_sec": 0 00:11:24.146 }, 00:11:24.146 "claimed": true, 00:11:24.146 "claim_type": "exclusive_write", 00:11:24.146 "zoned": false, 00:11:24.146 "supported_io_types": { 00:11:24.146 "read": true, 00:11:24.146 "write": true, 00:11:24.146 "unmap": true, 00:11:24.146 "flush": true, 00:11:24.146 "reset": true, 00:11:24.146 "nvme_admin": false, 00:11:24.146 "nvme_io": false, 00:11:24.146 "nvme_io_md": false, 00:11:24.146 "write_zeroes": true, 00:11:24.146 "zcopy": true, 00:11:24.146 "get_zone_info": false, 00:11:24.146 "zone_management": false, 00:11:24.146 "zone_append": false, 00:11:24.146 "compare": false, 00:11:24.146 "compare_and_write": false, 00:11:24.146 "abort": true, 00:11:24.146 "seek_hole": false, 00:11:24.146 "seek_data": false, 00:11:24.146 "copy": true, 00:11:24.146 "nvme_iov_md": false 00:11:24.146 }, 00:11:24.146 "memory_domains": [ 00:11:24.146 { 00:11:24.146 "dma_device_id": "system", 00:11:24.146 "dma_device_type": 1 00:11:24.146 }, 00:11:24.146 { 00:11:24.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.146 "dma_device_type": 2 00:11:24.146 } 00:11:24.146 ], 00:11:24.146 "driver_specific": {} 00:11:24.146 } 00:11:24.146 ] 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:24.146 03:18:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.146 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.147 "name": "Existed_Raid", 00:11:24.147 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:24.147 "strip_size_kb": 0, 00:11:24.147 
"state": "online", 00:11:24.147 "raid_level": "raid1", 00:11:24.147 "superblock": true, 00:11:24.147 "num_base_bdevs": 4, 00:11:24.147 "num_base_bdevs_discovered": 4, 00:11:24.147 "num_base_bdevs_operational": 4, 00:11:24.147 "base_bdevs_list": [ 00:11:24.147 { 00:11:24.147 "name": "NewBaseBdev", 00:11:24.147 "uuid": "d820a81b-9d77-4def-9c40-3a8303f7917d", 00:11:24.147 "is_configured": true, 00:11:24.147 "data_offset": 2048, 00:11:24.147 "data_size": 63488 00:11:24.147 }, 00:11:24.147 { 00:11:24.147 "name": "BaseBdev2", 00:11:24.147 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 00:11:24.147 "is_configured": true, 00:11:24.147 "data_offset": 2048, 00:11:24.147 "data_size": 63488 00:11:24.147 }, 00:11:24.147 { 00:11:24.147 "name": "BaseBdev3", 00:11:24.147 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:24.147 "is_configured": true, 00:11:24.147 "data_offset": 2048, 00:11:24.147 "data_size": 63488 00:11:24.147 }, 00:11:24.147 { 00:11:24.147 "name": "BaseBdev4", 00:11:24.147 "uuid": "b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:24.147 "is_configured": true, 00:11:24.147 "data_offset": 2048, 00:11:24.147 "data_size": 63488 00:11:24.147 } 00:11:24.147 ] 00:11:24.147 }' 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.147 03:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.716 
03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.716 [2024-11-20 03:18:14.088934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.716 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.716 "name": "Existed_Raid", 00:11:24.716 "aliases": [ 00:11:24.716 "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce" 00:11:24.716 ], 00:11:24.716 "product_name": "Raid Volume", 00:11:24.716 "block_size": 512, 00:11:24.716 "num_blocks": 63488, 00:11:24.716 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:24.716 "assigned_rate_limits": { 00:11:24.716 "rw_ios_per_sec": 0, 00:11:24.716 "rw_mbytes_per_sec": 0, 00:11:24.716 "r_mbytes_per_sec": 0, 00:11:24.716 "w_mbytes_per_sec": 0 00:11:24.716 }, 00:11:24.716 "claimed": false, 00:11:24.716 "zoned": false, 00:11:24.716 "supported_io_types": { 00:11:24.716 "read": true, 00:11:24.716 "write": true, 00:11:24.716 "unmap": false, 00:11:24.716 "flush": false, 00:11:24.716 "reset": true, 00:11:24.716 "nvme_admin": false, 00:11:24.716 "nvme_io": false, 00:11:24.716 "nvme_io_md": false, 00:11:24.716 "write_zeroes": true, 00:11:24.716 "zcopy": false, 00:11:24.716 "get_zone_info": false, 00:11:24.716 "zone_management": false, 00:11:24.716 "zone_append": false, 00:11:24.716 "compare": false, 00:11:24.716 "compare_and_write": false, 00:11:24.716 
"abort": false, 00:11:24.716 "seek_hole": false, 00:11:24.716 "seek_data": false, 00:11:24.716 "copy": false, 00:11:24.717 "nvme_iov_md": false 00:11:24.717 }, 00:11:24.717 "memory_domains": [ 00:11:24.717 { 00:11:24.717 "dma_device_id": "system", 00:11:24.717 "dma_device_type": 1 00:11:24.717 }, 00:11:24.717 { 00:11:24.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.717 "dma_device_type": 2 00:11:24.717 }, 00:11:24.717 { 00:11:24.717 "dma_device_id": "system", 00:11:24.717 "dma_device_type": 1 00:11:24.717 }, 00:11:24.717 { 00:11:24.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.717 "dma_device_type": 2 00:11:24.717 }, 00:11:24.717 { 00:11:24.717 "dma_device_id": "system", 00:11:24.717 "dma_device_type": 1 00:11:24.717 }, 00:11:24.717 { 00:11:24.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.717 "dma_device_type": 2 00:11:24.717 }, 00:11:24.717 { 00:11:24.717 "dma_device_id": "system", 00:11:24.717 "dma_device_type": 1 00:11:24.717 }, 00:11:24.717 { 00:11:24.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.717 "dma_device_type": 2 00:11:24.717 } 00:11:24.717 ], 00:11:24.717 "driver_specific": { 00:11:24.717 "raid": { 00:11:24.717 "uuid": "b7ae7d3b-0cd6-4314-8463-50f3b41e65ce", 00:11:24.717 "strip_size_kb": 0, 00:11:24.717 "state": "online", 00:11:24.717 "raid_level": "raid1", 00:11:24.717 "superblock": true, 00:11:24.717 "num_base_bdevs": 4, 00:11:24.717 "num_base_bdevs_discovered": 4, 00:11:24.717 "num_base_bdevs_operational": 4, 00:11:24.717 "base_bdevs_list": [ 00:11:24.717 { 00:11:24.717 "name": "NewBaseBdev", 00:11:24.717 "uuid": "d820a81b-9d77-4def-9c40-3a8303f7917d", 00:11:24.717 "is_configured": true, 00:11:24.717 "data_offset": 2048, 00:11:24.717 "data_size": 63488 00:11:24.717 }, 00:11:24.717 { 00:11:24.717 "name": "BaseBdev2", 00:11:24.717 "uuid": "9a5f7d65-f910-43df-bc29-4a5c1a381201", 00:11:24.717 "is_configured": true, 00:11:24.717 "data_offset": 2048, 00:11:24.717 "data_size": 63488 00:11:24.717 }, 00:11:24.717 { 
00:11:24.717 "name": "BaseBdev3", 00:11:24.717 "uuid": "5f2fbfbb-f7be-4a46-a390-aeb521b74237", 00:11:24.717 "is_configured": true, 00:11:24.717 "data_offset": 2048, 00:11:24.717 "data_size": 63488 00:11:24.717 }, 00:11:24.717 { 00:11:24.717 "name": "BaseBdev4", 00:11:24.717 "uuid": "b8c2735b-562e-4f9a-a94a-9be53aab2033", 00:11:24.717 "is_configured": true, 00:11:24.717 "data_offset": 2048, 00:11:24.717 "data_size": 63488 00:11:24.717 } 00:11:24.717 ] 00:11:24.717 } 00:11:24.717 } 00:11:24.717 }' 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:24.717 BaseBdev2 00:11:24.717 BaseBdev3 00:11:24.717 BaseBdev4' 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.717 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.978 [2024-11-20 03:18:14.407991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.978 [2024-11-20 03:18:14.408022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.978 [2024-11-20 03:18:14.408104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.978 [2024-11-20 03:18:14.408391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.978 [2024-11-20 03:18:14.408404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73691 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73691 ']' 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73691 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73691 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.978 killing process with pid 73691 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73691' 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73691 00:11:24.978 [2024-11-20 03:18:14.454855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.978 03:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73691 00:11:25.238 [2024-11-20 03:18:14.865145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.619 03:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:26.619 00:11:26.619 real 0m11.581s 00:11:26.619 user 0m18.413s 00:11:26.619 sys 0m2.073s 00:11:26.619 03:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:26.619 03:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.619 ************************************ 00:11:26.619 END TEST raid_state_function_test_sb 00:11:26.619 ************************************ 00:11:26.619 03:18:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:26.619 03:18:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:26.619 03:18:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.619 03:18:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.619 ************************************ 00:11:26.619 START TEST raid_superblock_test 00:11:26.619 ************************************ 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:26.619 03:18:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74362 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74362 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74362 ']' 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.619 03:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.619 [2024-11-20 03:18:16.161103] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:11:26.619 [2024-11-20 03:18:16.161226] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74362 ] 00:11:26.879 [2024-11-20 03:18:16.336336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.879 [2024-11-20 03:18:16.458335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.163 [2024-11-20 03:18:16.662226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.163 [2024-11-20 03:18:16.662271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:27.424 
03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.424 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 malloc1 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 [2024-11-20 03:18:17.071128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:27.685 [2024-11-20 03:18:17.071267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.685 [2024-11-20 03:18:17.071313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:27.685 [2024-11-20 03:18:17.071348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.685 [2024-11-20 03:18:17.073680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.685 [2024-11-20 03:18:17.073771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:27.685 pt1 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 malloc2 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 [2024-11-20 03:18:17.131471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.685 [2024-11-20 03:18:17.131598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.685 [2024-11-20 03:18:17.131636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:27.685 [2024-11-20 03:18:17.131647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.685 [2024-11-20 03:18:17.133980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.685 [2024-11-20 03:18:17.134018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.685 
pt2 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 malloc3 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 [2024-11-20 03:18:17.198848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:27.685 [2024-11-20 03:18:17.198975] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.685 [2024-11-20 03:18:17.199024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:27.685 [2024-11-20 03:18:17.199066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.685 [2024-11-20 03:18:17.201457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.685 [2024-11-20 03:18:17.201531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:27.685 pt3 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 malloc4 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 [2024-11-20 03:18:17.258214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:27.685 [2024-11-20 03:18:17.258333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.685 [2024-11-20 03:18:17.258390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:27.685 [2024-11-20 03:18:17.258427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.685 [2024-11-20 03:18:17.260588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.685 [2024-11-20 03:18:17.260686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:27.685 pt4 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 [2024-11-20 03:18:17.270237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:27.685 [2024-11-20 03:18:17.272125] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.685 [2024-11-20 03:18:17.272190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:27.685 [2024-11-20 03:18:17.272232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:27.685 [2024-11-20 03:18:17.272425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:27.686 [2024-11-20 03:18:17.272441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.686 [2024-11-20 03:18:17.272756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:27.686 [2024-11-20 03:18:17.272936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:27.686 [2024-11-20 03:18:17.272959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:27.686 [2024-11-20 03:18:17.273165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.686 
03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.686 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.945 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.945 "name": "raid_bdev1", 00:11:27.945 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:27.945 "strip_size_kb": 0, 00:11:27.945 "state": "online", 00:11:27.945 "raid_level": "raid1", 00:11:27.945 "superblock": true, 00:11:27.945 "num_base_bdevs": 4, 00:11:27.945 "num_base_bdevs_discovered": 4, 00:11:27.945 "num_base_bdevs_operational": 4, 00:11:27.945 "base_bdevs_list": [ 00:11:27.945 { 00:11:27.945 "name": "pt1", 00:11:27.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.945 "is_configured": true, 00:11:27.945 "data_offset": 2048, 00:11:27.945 "data_size": 63488 00:11:27.945 }, 00:11:27.945 { 00:11:27.945 "name": "pt2", 00:11:27.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.945 "is_configured": true, 00:11:27.945 "data_offset": 2048, 00:11:27.945 "data_size": 63488 00:11:27.945 }, 00:11:27.945 { 00:11:27.945 "name": "pt3", 00:11:27.945 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.945 "is_configured": true, 00:11:27.945 "data_offset": 2048, 00:11:27.945 "data_size": 63488 
00:11:27.945 }, 00:11:27.945 { 00:11:27.945 "name": "pt4", 00:11:27.945 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.945 "is_configured": true, 00:11:27.945 "data_offset": 2048, 00:11:27.945 "data_size": 63488 00:11:27.945 } 00:11:27.945 ] 00:11:27.945 }' 00:11:27.945 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.945 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.205 [2024-11-20 03:18:17.673866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.205 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.205 "name": "raid_bdev1", 00:11:28.205 "aliases": [ 00:11:28.205 "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb" 00:11:28.205 ], 
00:11:28.205 "product_name": "Raid Volume", 00:11:28.205 "block_size": 512, 00:11:28.205 "num_blocks": 63488, 00:11:28.205 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:28.205 "assigned_rate_limits": { 00:11:28.205 "rw_ios_per_sec": 0, 00:11:28.205 "rw_mbytes_per_sec": 0, 00:11:28.205 "r_mbytes_per_sec": 0, 00:11:28.205 "w_mbytes_per_sec": 0 00:11:28.205 }, 00:11:28.205 "claimed": false, 00:11:28.205 "zoned": false, 00:11:28.205 "supported_io_types": { 00:11:28.205 "read": true, 00:11:28.205 "write": true, 00:11:28.205 "unmap": false, 00:11:28.205 "flush": false, 00:11:28.205 "reset": true, 00:11:28.205 "nvme_admin": false, 00:11:28.205 "nvme_io": false, 00:11:28.205 "nvme_io_md": false, 00:11:28.205 "write_zeroes": true, 00:11:28.205 "zcopy": false, 00:11:28.205 "get_zone_info": false, 00:11:28.205 "zone_management": false, 00:11:28.205 "zone_append": false, 00:11:28.206 "compare": false, 00:11:28.206 "compare_and_write": false, 00:11:28.206 "abort": false, 00:11:28.206 "seek_hole": false, 00:11:28.206 "seek_data": false, 00:11:28.206 "copy": false, 00:11:28.206 "nvme_iov_md": false 00:11:28.206 }, 00:11:28.206 "memory_domains": [ 00:11:28.206 { 00:11:28.206 "dma_device_id": "system", 00:11:28.206 "dma_device_type": 1 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.206 "dma_device_type": 2 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "dma_device_id": "system", 00:11:28.206 "dma_device_type": 1 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.206 "dma_device_type": 2 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "dma_device_id": "system", 00:11:28.206 "dma_device_type": 1 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.206 "dma_device_type": 2 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "dma_device_id": "system", 00:11:28.206 "dma_device_type": 1 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:28.206 "dma_device_type": 2 00:11:28.206 } 00:11:28.206 ], 00:11:28.206 "driver_specific": { 00:11:28.206 "raid": { 00:11:28.206 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:28.206 "strip_size_kb": 0, 00:11:28.206 "state": "online", 00:11:28.206 "raid_level": "raid1", 00:11:28.206 "superblock": true, 00:11:28.206 "num_base_bdevs": 4, 00:11:28.206 "num_base_bdevs_discovered": 4, 00:11:28.206 "num_base_bdevs_operational": 4, 00:11:28.206 "base_bdevs_list": [ 00:11:28.206 { 00:11:28.206 "name": "pt1", 00:11:28.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.206 "is_configured": true, 00:11:28.206 "data_offset": 2048, 00:11:28.206 "data_size": 63488 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "name": "pt2", 00:11:28.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.206 "is_configured": true, 00:11:28.206 "data_offset": 2048, 00:11:28.206 "data_size": 63488 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "name": "pt3", 00:11:28.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.206 "is_configured": true, 00:11:28.206 "data_offset": 2048, 00:11:28.206 "data_size": 63488 00:11:28.206 }, 00:11:28.206 { 00:11:28.206 "name": "pt4", 00:11:28.206 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.206 "is_configured": true, 00:11:28.206 "data_offset": 2048, 00:11:28.206 "data_size": 63488 00:11:28.206 } 00:11:28.206 ] 00:11:28.206 } 00:11:28.206 } 00:11:28.206 }' 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:28.206 pt2 00:11:28.206 pt3 00:11:28.206 pt4' 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.206 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.466 03:18:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:28.466 03:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:28.466 [2024-11-20 03:18:18.001248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9fffdc9e-7ae2-4384-acf3-72a441b9a0eb 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9fffdc9e-7ae2-4384-acf3-72a441b9a0eb ']' 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.466 [2024-11-20 03:18:18.048879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.466 [2024-11-20 03:18:18.048954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.466 [2024-11-20 03:18:18.049040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.466 [2024-11-20 03:18:18.049123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.466 [2024-11-20 03:18:18.049156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.466 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.726 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 [2024-11-20 03:18:18.216598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:28.727 [2024-11-20 03:18:18.218665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:28.727 [2024-11-20 03:18:18.218763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:28.727 [2024-11-20 03:18:18.218817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:28.727 [2024-11-20 03:18:18.218913] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:28.727 [2024-11-20 03:18:18.219003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:28.727 [2024-11-20 03:18:18.219025] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:28.727 [2024-11-20 03:18:18.219046] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:28.727 [2024-11-20 03:18:18.219062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.727 [2024-11-20 03:18:18.219074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:11:28.727 request: 00:11:28.727 { 00:11:28.727 "name": "raid_bdev1", 00:11:28.727 "raid_level": "raid1", 00:11:28.727 "base_bdevs": [ 00:11:28.727 "malloc1", 00:11:28.727 "malloc2", 00:11:28.727 "malloc3", 00:11:28.727 "malloc4" 00:11:28.727 ], 00:11:28.727 "superblock": false, 00:11:28.727 "method": "bdev_raid_create", 00:11:28.727 "req_id": 1 00:11:28.727 } 00:11:28.727 Got JSON-RPC error response 00:11:28.727 response: 00:11:28.727 { 00:11:28.727 "code": -17, 00:11:28.727 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:28.727 } 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:28.727 03:18:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 [2024-11-20 03:18:18.284455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:28.727 [2024-11-20 03:18:18.284569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.727 [2024-11-20 03:18:18.284604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:28.727 [2024-11-20 03:18:18.284645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.727 [2024-11-20 03:18:18.286841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.727 [2024-11-20 03:18:18.286923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:28.727 [2024-11-20 03:18:18.287044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:28.727 [2024-11-20 03:18:18.287124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:28.727 pt1 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.727 03:18:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.727 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.727 "name": "raid_bdev1", 00:11:28.727 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:28.727 "strip_size_kb": 0, 00:11:28.727 "state": "configuring", 00:11:28.727 "raid_level": "raid1", 00:11:28.727 "superblock": true, 00:11:28.727 "num_base_bdevs": 4, 00:11:28.727 "num_base_bdevs_discovered": 1, 00:11:28.727 "num_base_bdevs_operational": 4, 00:11:28.727 "base_bdevs_list": [ 00:11:28.727 { 00:11:28.727 "name": "pt1", 00:11:28.727 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.727 "is_configured": true, 00:11:28.727 "data_offset": 2048, 00:11:28.727 "data_size": 63488 00:11:28.727 }, 00:11:28.727 { 00:11:28.727 "name": null, 00:11:28.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.727 "is_configured": false, 00:11:28.727 "data_offset": 2048, 00:11:28.727 "data_size": 63488 00:11:28.727 }, 00:11:28.727 { 00:11:28.727 "name": null, 00:11:28.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.727 
"is_configured": false, 00:11:28.727 "data_offset": 2048, 00:11:28.727 "data_size": 63488 00:11:28.727 }, 00:11:28.727 { 00:11:28.727 "name": null, 00:11:28.728 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.728 "is_configured": false, 00:11:28.728 "data_offset": 2048, 00:11:28.728 "data_size": 63488 00:11:28.728 } 00:11:28.728 ] 00:11:28.728 }' 00:11:28.728 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.728 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.296 [2024-11-20 03:18:18.723782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:29.296 [2024-11-20 03:18:18.723931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.296 [2024-11-20 03:18:18.723971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:29.296 [2024-11-20 03:18:18.724005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.296 [2024-11-20 03:18:18.724507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.296 [2024-11-20 03:18:18.724542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:29.296 [2024-11-20 03:18:18.724643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:29.296 [2024-11-20 03:18:18.724681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:29.296 pt2 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.296 [2024-11-20 03:18:18.731770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.296 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.296 "name": "raid_bdev1", 00:11:29.296 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:29.296 "strip_size_kb": 0, 00:11:29.296 "state": "configuring", 00:11:29.296 "raid_level": "raid1", 00:11:29.296 "superblock": true, 00:11:29.296 "num_base_bdevs": 4, 00:11:29.296 "num_base_bdevs_discovered": 1, 00:11:29.296 "num_base_bdevs_operational": 4, 00:11:29.296 "base_bdevs_list": [ 00:11:29.297 { 00:11:29.297 "name": "pt1", 00:11:29.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.297 "is_configured": true, 00:11:29.297 "data_offset": 2048, 00:11:29.297 "data_size": 63488 00:11:29.297 }, 00:11:29.297 { 00:11:29.297 "name": null, 00:11:29.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.297 "is_configured": false, 00:11:29.297 "data_offset": 0, 00:11:29.297 "data_size": 63488 00:11:29.297 }, 00:11:29.297 { 00:11:29.297 "name": null, 00:11:29.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.297 "is_configured": false, 00:11:29.297 "data_offset": 2048, 00:11:29.297 "data_size": 63488 00:11:29.297 }, 00:11:29.297 { 00:11:29.297 "name": null, 00:11:29.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.297 "is_configured": false, 00:11:29.297 "data_offset": 2048, 00:11:29.297 "data_size": 63488 00:11:29.297 } 00:11:29.297 ] 00:11:29.297 }' 00:11:29.297 03:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.297 03:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.556 [2024-11-20 03:18:19.163006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:29.556 [2024-11-20 03:18:19.163153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.556 [2024-11-20 03:18:19.163187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:29.556 [2024-11-20 03:18:19.163200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.556 [2024-11-20 03:18:19.163715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.556 [2024-11-20 03:18:19.163735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:29.556 [2024-11-20 03:18:19.163823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:29.556 [2024-11-20 03:18:19.163845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:29.556 pt2 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:29.556 03:18:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.556 [2024-11-20 03:18:19.174955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:29.556 [2024-11-20 03:18:19.175007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.556 [2024-11-20 03:18:19.175028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:29.556 [2024-11-20 03:18:19.175036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.556 [2024-11-20 03:18:19.175431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.556 [2024-11-20 03:18:19.175446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:29.556 [2024-11-20 03:18:19.175514] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:29.556 [2024-11-20 03:18:19.175533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:29.556 pt3 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.556 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.556 [2024-11-20 03:18:19.186915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:29.556 [2024-11-20 
03:18:19.186967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.556 [2024-11-20 03:18:19.186987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:29.556 [2024-11-20 03:18:19.186997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.556 [2024-11-20 03:18:19.187438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.556 [2024-11-20 03:18:19.187455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:29.556 [2024-11-20 03:18:19.187524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:29.556 [2024-11-20 03:18:19.187543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:29.556 [2024-11-20 03:18:19.187727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.556 [2024-11-20 03:18:19.187739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:29.815 [2024-11-20 03:18:19.188000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:29.815 [2024-11-20 03:18:19.188200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.815 [2024-11-20 03:18:19.188215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:29.815 [2024-11-20 03:18:19.188376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.815 pt4 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.815 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.815 "name": "raid_bdev1", 00:11:29.815 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:29.815 "strip_size_kb": 0, 00:11:29.815 "state": "online", 00:11:29.815 "raid_level": "raid1", 00:11:29.815 "superblock": true, 00:11:29.815 "num_base_bdevs": 4, 00:11:29.815 
"num_base_bdevs_discovered": 4, 00:11:29.815 "num_base_bdevs_operational": 4, 00:11:29.815 "base_bdevs_list": [ 00:11:29.816 { 00:11:29.816 "name": "pt1", 00:11:29.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.816 "is_configured": true, 00:11:29.816 "data_offset": 2048, 00:11:29.816 "data_size": 63488 00:11:29.816 }, 00:11:29.816 { 00:11:29.816 "name": "pt2", 00:11:29.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.816 "is_configured": true, 00:11:29.816 "data_offset": 2048, 00:11:29.816 "data_size": 63488 00:11:29.816 }, 00:11:29.816 { 00:11:29.816 "name": "pt3", 00:11:29.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.816 "is_configured": true, 00:11:29.816 "data_offset": 2048, 00:11:29.816 "data_size": 63488 00:11:29.816 }, 00:11:29.816 { 00:11:29.816 "name": "pt4", 00:11:29.816 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.816 "is_configured": true, 00:11:29.816 "data_offset": 2048, 00:11:29.816 "data_size": 63488 00:11:29.816 } 00:11:29.816 ] 00:11:29.816 }' 00:11:29.816 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.816 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.074 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.075 03:18:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.075 [2024-11-20 03:18:19.642557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.075 "name": "raid_bdev1", 00:11:30.075 "aliases": [ 00:11:30.075 "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb" 00:11:30.075 ], 00:11:30.075 "product_name": "Raid Volume", 00:11:30.075 "block_size": 512, 00:11:30.075 "num_blocks": 63488, 00:11:30.075 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:30.075 "assigned_rate_limits": { 00:11:30.075 "rw_ios_per_sec": 0, 00:11:30.075 "rw_mbytes_per_sec": 0, 00:11:30.075 "r_mbytes_per_sec": 0, 00:11:30.075 "w_mbytes_per_sec": 0 00:11:30.075 }, 00:11:30.075 "claimed": false, 00:11:30.075 "zoned": false, 00:11:30.075 "supported_io_types": { 00:11:30.075 "read": true, 00:11:30.075 "write": true, 00:11:30.075 "unmap": false, 00:11:30.075 "flush": false, 00:11:30.075 "reset": true, 00:11:30.075 "nvme_admin": false, 00:11:30.075 "nvme_io": false, 00:11:30.075 "nvme_io_md": false, 00:11:30.075 "write_zeroes": true, 00:11:30.075 "zcopy": false, 00:11:30.075 "get_zone_info": false, 00:11:30.075 "zone_management": false, 00:11:30.075 "zone_append": false, 00:11:30.075 "compare": false, 00:11:30.075 "compare_and_write": false, 00:11:30.075 "abort": false, 00:11:30.075 "seek_hole": false, 00:11:30.075 "seek_data": false, 00:11:30.075 "copy": false, 00:11:30.075 "nvme_iov_md": false 00:11:30.075 }, 00:11:30.075 "memory_domains": [ 00:11:30.075 { 00:11:30.075 "dma_device_id": "system", 00:11:30.075 
"dma_device_type": 1 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.075 "dma_device_type": 2 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "dma_device_id": "system", 00:11:30.075 "dma_device_type": 1 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.075 "dma_device_type": 2 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "dma_device_id": "system", 00:11:30.075 "dma_device_type": 1 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.075 "dma_device_type": 2 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "dma_device_id": "system", 00:11:30.075 "dma_device_type": 1 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.075 "dma_device_type": 2 00:11:30.075 } 00:11:30.075 ], 00:11:30.075 "driver_specific": { 00:11:30.075 "raid": { 00:11:30.075 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:30.075 "strip_size_kb": 0, 00:11:30.075 "state": "online", 00:11:30.075 "raid_level": "raid1", 00:11:30.075 "superblock": true, 00:11:30.075 "num_base_bdevs": 4, 00:11:30.075 "num_base_bdevs_discovered": 4, 00:11:30.075 "num_base_bdevs_operational": 4, 00:11:30.075 "base_bdevs_list": [ 00:11:30.075 { 00:11:30.075 "name": "pt1", 00:11:30.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:30.075 "is_configured": true, 00:11:30.075 "data_offset": 2048, 00:11:30.075 "data_size": 63488 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "name": "pt2", 00:11:30.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:30.075 "is_configured": true, 00:11:30.075 "data_offset": 2048, 00:11:30.075 "data_size": 63488 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "name": "pt3", 00:11:30.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:30.075 "is_configured": true, 00:11:30.075 "data_offset": 2048, 00:11:30.075 "data_size": 63488 00:11:30.075 }, 00:11:30.075 { 00:11:30.075 "name": "pt4", 00:11:30.075 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:30.075 "is_configured": true, 00:11:30.075 "data_offset": 2048, 00:11:30.075 "data_size": 63488 00:11:30.075 } 00:11:30.075 ] 00:11:30.075 } 00:11:30.075 } 00:11:30.075 }' 00:11:30.075 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:30.335 pt2 00:11:30.335 pt3 00:11:30.335 pt4' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.335 03:18:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.335 03:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:30.335 [2024-11-20 03:18:19.966021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.594 03:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9fffdc9e-7ae2-4384-acf3-72a441b9a0eb '!=' 9fffdc9e-7ae2-4384-acf3-72a441b9a0eb ']' 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.594 [2024-11-20 03:18:20.017607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:30.594 03:18:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.594 "name": "raid_bdev1", 00:11:30.594 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:30.594 "strip_size_kb": 0, 00:11:30.594 "state": "online", 
00:11:30.594 "raid_level": "raid1", 00:11:30.594 "superblock": true, 00:11:30.594 "num_base_bdevs": 4, 00:11:30.594 "num_base_bdevs_discovered": 3, 00:11:30.594 "num_base_bdevs_operational": 3, 00:11:30.594 "base_bdevs_list": [ 00:11:30.594 { 00:11:30.594 "name": null, 00:11:30.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.594 "is_configured": false, 00:11:30.594 "data_offset": 0, 00:11:30.594 "data_size": 63488 00:11:30.594 }, 00:11:30.594 { 00:11:30.594 "name": "pt2", 00:11:30.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:30.594 "is_configured": true, 00:11:30.594 "data_offset": 2048, 00:11:30.594 "data_size": 63488 00:11:30.594 }, 00:11:30.594 { 00:11:30.594 "name": "pt3", 00:11:30.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:30.594 "is_configured": true, 00:11:30.594 "data_offset": 2048, 00:11:30.594 "data_size": 63488 00:11:30.594 }, 00:11:30.594 { 00:11:30.594 "name": "pt4", 00:11:30.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:30.594 "is_configured": true, 00:11:30.594 "data_offset": 2048, 00:11:30.594 "data_size": 63488 00:11:30.594 } 00:11:30.594 ] 00:11:30.594 }' 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.594 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.853 [2024-11-20 03:18:20.428853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:30.853 [2024-11-20 03:18:20.428954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.853 [2024-11-20 03:18:20.429055] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:30.853 [2024-11-20 03:18:20.429167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.853 [2024-11-20 03:18:20.429206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:30.853 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.854 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.854 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.854 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:30.854 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:30.854 
03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:30.854 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.854 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.113 [2024-11-20 03:18:20.512733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:31.113 [2024-11-20 03:18:20.512799] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.113 [2024-11-20 03:18:20.512835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:31.113 [2024-11-20 03:18:20.512844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.113 [2024-11-20 03:18:20.515154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.113 [2024-11-20 03:18:20.515196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:31.113 [2024-11-20 03:18:20.515290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:31.113 [2024-11-20 03:18:20.515343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:31.113 pt2 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.113 "name": "raid_bdev1", 00:11:31.113 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:31.113 "strip_size_kb": 0, 00:11:31.113 "state": "configuring", 00:11:31.113 "raid_level": "raid1", 00:11:31.113 "superblock": true, 00:11:31.113 "num_base_bdevs": 4, 00:11:31.113 "num_base_bdevs_discovered": 1, 00:11:31.113 "num_base_bdevs_operational": 3, 00:11:31.113 "base_bdevs_list": [ 00:11:31.113 { 00:11:31.113 "name": null, 00:11:31.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.113 "is_configured": false, 00:11:31.113 "data_offset": 2048, 00:11:31.113 "data_size": 63488 00:11:31.113 }, 00:11:31.113 { 00:11:31.113 "name": "pt2", 00:11:31.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.113 "is_configured": true, 00:11:31.113 "data_offset": 2048, 00:11:31.113 "data_size": 63488 00:11:31.113 }, 00:11:31.113 { 00:11:31.113 "name": null, 00:11:31.113 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:31.113 "is_configured": false, 00:11:31.113 "data_offset": 2048, 00:11:31.113 "data_size": 63488 00:11:31.113 }, 00:11:31.113 { 00:11:31.113 "name": null, 00:11:31.113 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:31.113 "is_configured": false, 00:11:31.113 "data_offset": 2048, 00:11:31.113 "data_size": 63488 00:11:31.113 } 00:11:31.113 ] 00:11:31.113 }' 
00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.113 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.373 [2024-11-20 03:18:20.987938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:31.373 [2024-11-20 03:18:20.988094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.373 [2024-11-20 03:18:20.988144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:31.373 [2024-11-20 03:18:20.988183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.373 [2024-11-20 03:18:20.988757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.373 [2024-11-20 03:18:20.988824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:31.373 [2024-11-20 03:18:20.988949] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:31.373 [2024-11-20 03:18:20.989003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:31.373 pt3 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.373 03:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.373 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.373 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.633 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.633 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.633 "name": "raid_bdev1", 00:11:31.633 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:31.633 "strip_size_kb": 0, 00:11:31.633 "state": "configuring", 00:11:31.633 "raid_level": "raid1", 00:11:31.633 "superblock": true, 00:11:31.633 "num_base_bdevs": 4, 00:11:31.633 "num_base_bdevs_discovered": 2, 00:11:31.633 "num_base_bdevs_operational": 3, 00:11:31.633 
"base_bdevs_list": [ 00:11:31.633 { 00:11:31.633 "name": null, 00:11:31.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.633 "is_configured": false, 00:11:31.633 "data_offset": 2048, 00:11:31.633 "data_size": 63488 00:11:31.633 }, 00:11:31.633 { 00:11:31.633 "name": "pt2", 00:11:31.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.633 "is_configured": true, 00:11:31.633 "data_offset": 2048, 00:11:31.633 "data_size": 63488 00:11:31.633 }, 00:11:31.633 { 00:11:31.633 "name": "pt3", 00:11:31.633 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:31.633 "is_configured": true, 00:11:31.633 "data_offset": 2048, 00:11:31.633 "data_size": 63488 00:11:31.633 }, 00:11:31.633 { 00:11:31.633 "name": null, 00:11:31.633 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:31.633 "is_configured": false, 00:11:31.633 "data_offset": 2048, 00:11:31.633 "data_size": 63488 00:11:31.633 } 00:11:31.633 ] 00:11:31.633 }' 00:11:31.633 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.633 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.893 [2024-11-20 03:18:21.463153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:31.893 [2024-11-20 03:18:21.463232] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.893 [2024-11-20 03:18:21.463257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:31.893 [2024-11-20 03:18:21.463267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.893 [2024-11-20 03:18:21.463777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.893 [2024-11-20 03:18:21.463795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:31.893 [2024-11-20 03:18:21.463876] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:31.893 [2024-11-20 03:18:21.463903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:31.893 [2024-11-20 03:18:21.464082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:31.893 [2024-11-20 03:18:21.464099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:31.893 [2024-11-20 03:18:21.464356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:31.893 [2024-11-20 03:18:21.464510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:31.893 [2024-11-20 03:18:21.464523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:31.893 [2024-11-20 03:18:21.464699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.893 pt4 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.893 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.893 "name": "raid_bdev1", 00:11:31.893 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:31.893 "strip_size_kb": 0, 00:11:31.893 "state": "online", 00:11:31.893 "raid_level": "raid1", 00:11:31.893 "superblock": true, 00:11:31.893 "num_base_bdevs": 4, 00:11:31.893 "num_base_bdevs_discovered": 3, 00:11:31.893 "num_base_bdevs_operational": 3, 00:11:31.893 "base_bdevs_list": [ 00:11:31.893 { 00:11:31.893 "name": null, 00:11:31.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.893 "is_configured": false, 00:11:31.893 
"data_offset": 2048, 00:11:31.893 "data_size": 63488 00:11:31.893 }, 00:11:31.893 { 00:11:31.893 "name": "pt2", 00:11:31.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.893 "is_configured": true, 00:11:31.893 "data_offset": 2048, 00:11:31.893 "data_size": 63488 00:11:31.893 }, 00:11:31.893 { 00:11:31.893 "name": "pt3", 00:11:31.894 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:31.894 "is_configured": true, 00:11:31.894 "data_offset": 2048, 00:11:31.894 "data_size": 63488 00:11:31.894 }, 00:11:31.894 { 00:11:31.894 "name": "pt4", 00:11:31.894 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:31.894 "is_configured": true, 00:11:31.894 "data_offset": 2048, 00:11:31.894 "data_size": 63488 00:11:31.894 } 00:11:31.894 ] 00:11:31.894 }' 00:11:31.894 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.894 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.463 [2024-11-20 03:18:21.938301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.463 [2024-11-20 03:18:21.938388] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.463 [2024-11-20 03:18:21.938522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.463 [2024-11-20 03:18:21.938623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.463 [2024-11-20 03:18:21.938720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:32.463 03:18:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:32.463 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:32.464 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:32.464 03:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:32.464 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.464 03:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.464 [2024-11-20 03:18:22.006195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.464 [2024-11-20 03:18:22.006324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:32.464 [2024-11-20 03:18:22.006366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:32.464 [2024-11-20 03:18:22.006421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.464 [2024-11-20 03:18:22.008805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.464 [2024-11-20 03:18:22.008888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.464 [2024-11-20 03:18:22.009001] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:32.464 [2024-11-20 03:18:22.009102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.464 [2024-11-20 03:18:22.009288] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:32.464 [2024-11-20 03:18:22.009348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.464 [2024-11-20 03:18:22.009390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:32.464 [2024-11-20 03:18:22.009511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:32.464 [2024-11-20 03:18:22.009684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:32.464 pt1 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.464 "name": "raid_bdev1", 00:11:32.464 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:32.464 "strip_size_kb": 0, 00:11:32.464 "state": "configuring", 00:11:32.464 "raid_level": "raid1", 00:11:32.464 "superblock": true, 00:11:32.464 "num_base_bdevs": 4, 00:11:32.464 "num_base_bdevs_discovered": 2, 00:11:32.464 "num_base_bdevs_operational": 3, 00:11:32.464 "base_bdevs_list": [ 00:11:32.464 { 00:11:32.464 "name": null, 00:11:32.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.464 "is_configured": false, 00:11:32.464 "data_offset": 2048, 00:11:32.464 
"data_size": 63488 00:11:32.464 }, 00:11:32.464 { 00:11:32.464 "name": "pt2", 00:11:32.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.464 "is_configured": true, 00:11:32.464 "data_offset": 2048, 00:11:32.464 "data_size": 63488 00:11:32.464 }, 00:11:32.464 { 00:11:32.464 "name": "pt3", 00:11:32.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.464 "is_configured": true, 00:11:32.464 "data_offset": 2048, 00:11:32.464 "data_size": 63488 00:11:32.464 }, 00:11:32.464 { 00:11:32.464 "name": null, 00:11:32.464 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:32.464 "is_configured": false, 00:11:32.464 "data_offset": 2048, 00:11:32.464 "data_size": 63488 00:11:32.464 } 00:11:32.464 ] 00:11:32.464 }' 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.464 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.034 [2024-11-20 
03:18:22.505346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:33.034 [2024-11-20 03:18:22.505419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.034 [2024-11-20 03:18:22.505444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:33.034 [2024-11-20 03:18:22.505453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.034 [2024-11-20 03:18:22.505916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.034 [2024-11-20 03:18:22.505934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:33.034 [2024-11-20 03:18:22.506015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:33.034 [2024-11-20 03:18:22.506043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:33.034 [2024-11-20 03:18:22.506183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:33.034 [2024-11-20 03:18:22.506192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.034 [2024-11-20 03:18:22.506454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:33.034 [2024-11-20 03:18:22.506627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:33.034 [2024-11-20 03:18:22.506640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:33.034 [2024-11-20 03:18:22.506814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.034 pt4 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:33.034 03:18:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.034 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.034 "name": "raid_bdev1", 00:11:33.034 "uuid": "9fffdc9e-7ae2-4384-acf3-72a441b9a0eb", 00:11:33.034 "strip_size_kb": 0, 00:11:33.034 "state": "online", 00:11:33.034 "raid_level": "raid1", 00:11:33.034 "superblock": true, 00:11:33.034 "num_base_bdevs": 4, 00:11:33.034 "num_base_bdevs_discovered": 3, 00:11:33.034 "num_base_bdevs_operational": 3, 00:11:33.034 "base_bdevs_list": [ 00:11:33.034 { 
00:11:33.034 "name": null, 00:11:33.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.034 "is_configured": false, 00:11:33.034 "data_offset": 2048, 00:11:33.034 "data_size": 63488 00:11:33.034 }, 00:11:33.034 { 00:11:33.034 "name": "pt2", 00:11:33.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.034 "is_configured": true, 00:11:33.034 "data_offset": 2048, 00:11:33.034 "data_size": 63488 00:11:33.034 }, 00:11:33.034 { 00:11:33.035 "name": "pt3", 00:11:33.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.035 "is_configured": true, 00:11:33.035 "data_offset": 2048, 00:11:33.035 "data_size": 63488 00:11:33.035 }, 00:11:33.035 { 00:11:33.035 "name": "pt4", 00:11:33.035 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.035 "is_configured": true, 00:11:33.035 "data_offset": 2048, 00:11:33.035 "data_size": 63488 00:11:33.035 } 00:11:33.035 ] 00:11:33.035 }' 00:11:33.035 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.035 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.604 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:33.604 03:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:33.604 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.604 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.604 03:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.604 
03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:33.604 [2024-11-20 03:18:23.032801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9fffdc9e-7ae2-4384-acf3-72a441b9a0eb '!=' 9fffdc9e-7ae2-4384-acf3-72a441b9a0eb ']' 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74362 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74362 ']' 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74362 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74362 00:11:33.604 killing process with pid 74362 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74362' 00:11:33.604 03:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74362 00:11:33.604 [2024-11-20 03:18:23.118523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.604 [2024-11-20 03:18:23.118642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.604 03:18:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74362 00:11:33.604 [2024-11-20 03:18:23.118717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.604 [2024-11-20 03:18:23.118729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:34.173 [2024-11-20 03:18:23.520448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.113 03:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:35.113 00:11:35.113 real 0m8.577s 00:11:35.113 user 0m13.519s 00:11:35.113 sys 0m1.557s 00:11:35.113 03:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.113 ************************************ 00:11:35.113 END TEST raid_superblock_test 00:11:35.113 ************************************ 00:11:35.113 03:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.113 03:18:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:35.113 03:18:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:35.113 03:18:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.113 03:18:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.113 ************************************ 00:11:35.113 START TEST raid_read_error_test 00:11:35.113 ************************************ 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:35.113 
03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:35.113 03:18:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jODw9r24G1 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74849 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74849 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74849 ']' 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.113 03:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.373 03:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.373 03:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.373 [2024-11-20 03:18:24.825749] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:11:35.373 [2024-11-20 03:18:24.825876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74849 ] 00:11:35.373 [2024-11-20 03:18:24.981813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.632 [2024-11-20 03:18:25.098504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.892 [2024-11-20 03:18:25.305820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.892 [2024-11-20 03:18:25.305867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.152 BaseBdev1_malloc 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.152 true 00:11:36.152 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:36.153 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.153 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.153 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.153 [2024-11-20 03:18:25.774779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.153 [2024-11-20 03:18:25.774845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.153 [2024-11-20 03:18:25.774870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.153 [2024-11-20 03:18:25.774888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.153 [2024-11-20 03:18:25.777167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.153 [2024-11-20 03:18:25.777215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.153 BaseBdev1 00:11:36.153 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.153 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.153 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.153 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.153 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 BaseBdev2_malloc 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 true 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 [2024-11-20 03:18:25.841315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.414 [2024-11-20 03:18:25.841380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.414 [2024-11-20 03:18:25.841416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.414 [2024-11-20 03:18:25.841427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.414 [2024-11-20 03:18:25.843898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.414 [2024-11-20 03:18:25.843944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.414 BaseBdev2 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 BaseBdev3_malloc 00:11:36.414 03:18:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 true 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 [2024-11-20 03:18:25.920876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:36.414 [2024-11-20 03:18:25.920946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.414 [2024-11-20 03:18:25.920970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:36.414 [2024-11-20 03:18:25.920982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.414 [2024-11-20 03:18:25.923329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.414 [2024-11-20 03:18:25.923390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:36.414 BaseBdev3 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 BaseBdev4_malloc 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 true 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 [2024-11-20 03:18:25.990367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:36.414 [2024-11-20 03:18:25.990436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.414 [2024-11-20 03:18:25.990458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:36.414 [2024-11-20 03:18:25.990469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.414 [2024-11-20 03:18:25.992781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.414 [2024-11-20 03:18:25.992825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:36.414 BaseBdev4 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:36.414 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 03:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 [2024-11-20 03:18:26.002424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.415 [2024-11-20 03:18:26.004376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.415 [2024-11-20 03:18:26.004455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.415 [2024-11-20 03:18:26.004521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.415 [2024-11-20 03:18:26.004770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:36.415 [2024-11-20 03:18:26.004785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.415 [2024-11-20 03:18:26.005072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:36.415 [2024-11-20 03:18:26.005266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:36.415 [2024-11-20 03:18:26.005276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:36.415 [2024-11-20 03:18:26.005452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:36.415 03:18:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 03:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.682 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.682 "name": "raid_bdev1", 00:11:36.682 "uuid": "d00d61c6-b3ef-4599-8946-6899fc06bfaf", 00:11:36.682 "strip_size_kb": 0, 00:11:36.682 "state": "online", 00:11:36.682 "raid_level": "raid1", 00:11:36.682 "superblock": true, 00:11:36.682 "num_base_bdevs": 4, 00:11:36.682 "num_base_bdevs_discovered": 4, 00:11:36.682 "num_base_bdevs_operational": 4, 00:11:36.682 "base_bdevs_list": [ 00:11:36.682 { 
00:11:36.682 "name": "BaseBdev1", 00:11:36.682 "uuid": "d4f05740-a54c-5777-8a14-4c9fb5c889b1", 00:11:36.682 "is_configured": true, 00:11:36.682 "data_offset": 2048, 00:11:36.682 "data_size": 63488 00:11:36.682 }, 00:11:36.682 { 00:11:36.682 "name": "BaseBdev2", 00:11:36.682 "uuid": "64b585ea-414e-51af-bfd3-34d53334ba1f", 00:11:36.682 "is_configured": true, 00:11:36.682 "data_offset": 2048, 00:11:36.682 "data_size": 63488 00:11:36.682 }, 00:11:36.682 { 00:11:36.682 "name": "BaseBdev3", 00:11:36.682 "uuid": "ae7c48b4-d2ee-58da-b794-3b7a1dd24bd5", 00:11:36.682 "is_configured": true, 00:11:36.682 "data_offset": 2048, 00:11:36.682 "data_size": 63488 00:11:36.682 }, 00:11:36.682 { 00:11:36.682 "name": "BaseBdev4", 00:11:36.682 "uuid": "e5ba551f-8e96-51ab-982d-d7eed40dfbcc", 00:11:36.682 "is_configured": true, 00:11:36.682 "data_offset": 2048, 00:11:36.682 "data_size": 63488 00:11:36.682 } 00:11:36.682 ] 00:11:36.682 }' 00:11:36.682 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.682 03:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.973 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:36.973 03:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:37.232 [2024-11-20 03:18:26.606814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.169 03:18:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.169 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.169 03:18:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.170 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.170 "name": "raid_bdev1", 00:11:38.170 "uuid": "d00d61c6-b3ef-4599-8946-6899fc06bfaf", 00:11:38.170 "strip_size_kb": 0, 00:11:38.170 "state": "online", 00:11:38.170 "raid_level": "raid1", 00:11:38.170 "superblock": true, 00:11:38.170 "num_base_bdevs": 4, 00:11:38.170 "num_base_bdevs_discovered": 4, 00:11:38.170 "num_base_bdevs_operational": 4, 00:11:38.170 "base_bdevs_list": [ 00:11:38.170 { 00:11:38.170 "name": "BaseBdev1", 00:11:38.170 "uuid": "d4f05740-a54c-5777-8a14-4c9fb5c889b1", 00:11:38.170 "is_configured": true, 00:11:38.170 "data_offset": 2048, 00:11:38.170 "data_size": 63488 00:11:38.170 }, 00:11:38.170 { 00:11:38.170 "name": "BaseBdev2", 00:11:38.170 "uuid": "64b585ea-414e-51af-bfd3-34d53334ba1f", 00:11:38.170 "is_configured": true, 00:11:38.170 "data_offset": 2048, 00:11:38.170 "data_size": 63488 00:11:38.170 }, 00:11:38.170 { 00:11:38.170 "name": "BaseBdev3", 00:11:38.170 "uuid": "ae7c48b4-d2ee-58da-b794-3b7a1dd24bd5", 00:11:38.170 "is_configured": true, 00:11:38.170 "data_offset": 2048, 00:11:38.170 "data_size": 63488 00:11:38.170 }, 00:11:38.170 { 00:11:38.170 "name": "BaseBdev4", 00:11:38.170 "uuid": "e5ba551f-8e96-51ab-982d-d7eed40dfbcc", 00:11:38.170 "is_configured": true, 00:11:38.170 "data_offset": 2048, 00:11:38.170 "data_size": 63488 00:11:38.170 } 00:11:38.170 ] 00:11:38.170 }' 00:11:38.170 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.170 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.430 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.430 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.430 03:18:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.430 [2024-11-20 03:18:27.987567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.430 [2024-11-20 03:18:27.987736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.430 [2024-11-20 03:18:27.990879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.430 [2024-11-20 03:18:27.990952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.430 [2024-11-20 03:18:27.991088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.430 [2024-11-20 03:18:27.991103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:38.430 { 00:11:38.430 "results": [ 00:11:38.430 { 00:11:38.430 "job": "raid_bdev1", 00:11:38.430 "core_mask": "0x1", 00:11:38.430 "workload": "randrw", 00:11:38.430 "percentage": 50, 00:11:38.430 "status": "finished", 00:11:38.430 "queue_depth": 1, 00:11:38.430 "io_size": 131072, 00:11:38.430 "runtime": 1.381328, 00:11:38.430 "iops": 10269.827296630488, 00:11:38.430 "mibps": 1283.728412078811, 00:11:38.430 "io_failed": 0, 00:11:38.430 "io_timeout": 0, 00:11:38.430 "avg_latency_us": 94.5867333375608, 00:11:38.430 "min_latency_us": 24.034934497816593, 00:11:38.430 "max_latency_us": 1523.926637554585 00:11:38.430 } 00:11:38.430 ], 00:11:38.430 "core_count": 1 00:11:38.430 } 00:11:38.430 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.430 03:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74849 00:11:38.430 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74849 ']' 00:11:38.430 03:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74849 00:11:38.430 03:18:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:38.431 03:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.431 03:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74849 00:11:38.431 03:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.431 03:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.431 03:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74849' 00:11:38.431 killing process with pid 74849 00:11:38.431 03:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74849 00:11:38.431 [2024-11-20 03:18:28.037854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.431 03:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74849 00:11:39.000 [2024-11-20 03:18:28.365345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jODw9r24G1 00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:39.937 00:11:39.937 real 0m4.837s 00:11:39.937 user 0m5.833s 00:11:39.937 sys 0m0.581s 
00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.937 03:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.937 ************************************ 00:11:39.937 END TEST raid_read_error_test 00:11:39.937 ************************************ 00:11:40.197 03:18:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:40.197 03:18:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:40.197 03:18:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.197 03:18:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.197 ************************************ 00:11:40.197 START TEST raid_write_error_test 00:11:40.197 ************************************ 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:40.197 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zzEGStdBSe 00:11:40.198 03:18:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75000 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75000 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75000 ']' 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.198 03:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.198 [2024-11-20 03:18:29.733902] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:11:40.198 [2024-11-20 03:18:29.734104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75000 ] 00:11:40.457 [2024-11-20 03:18:29.906577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.457 [2024-11-20 03:18:30.021635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.717 [2024-11-20 03:18:30.224026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.717 [2024-11-20 03:18:30.224178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.976 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.976 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:40.976 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.976 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:40.976 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 BaseBdev1_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 true 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 [2024-11-20 03:18:30.630564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:41.237 [2024-11-20 03:18:30.630736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.237 [2024-11-20 03:18:30.630790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:41.237 [2024-11-20 03:18:30.630831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.237 [2024-11-20 03:18:30.633055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.237 [2024-11-20 03:18:30.633136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.237 BaseBdev1 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 BaseBdev2_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:41.237 03:18:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 true 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 [2024-11-20 03:18:30.696567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:41.237 [2024-11-20 03:18:30.696695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.237 [2024-11-20 03:18:30.696733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:41.237 [2024-11-20 03:18:30.696764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.237 [2024-11-20 03:18:30.698910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.237 [2024-11-20 03:18:30.698987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.237 BaseBdev2 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:41.237 BaseBdev3_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 true 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 [2024-11-20 03:18:30.779895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:41.237 [2024-11-20 03:18:30.779963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.237 [2024-11-20 03:18:30.779985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:41.237 [2024-11-20 03:18:30.779997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.237 [2024-11-20 03:18:30.782204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.237 [2024-11-20 03:18:30.782250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:41.237 BaseBdev3 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 BaseBdev4_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 true 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 [2024-11-20 03:18:30.848333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:41.237 [2024-11-20 03:18:30.848478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.237 [2024-11-20 03:18:30.848540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:41.237 [2024-11-20 03:18:30.848576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.237 [2024-11-20 03:18:30.850949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.237 [2024-11-20 03:18:30.851056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:41.237 BaseBdev4 
00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.237 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.237 [2024-11-20 03:18:30.860382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.238 [2024-11-20 03:18:30.862257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.238 [2024-11-20 03:18:30.862417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.238 [2024-11-20 03:18:30.862492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:41.238 [2024-11-20 03:18:30.862745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:41.238 [2024-11-20 03:18:30.862761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.238 [2024-11-20 03:18:30.863038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:41.238 [2024-11-20 03:18:30.863212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:41.238 [2024-11-20 03:18:30.863221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:41.238 [2024-11-20 03:18:30.863403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.238 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.497 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.497 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.498 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.498 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.498 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.498 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.498 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.498 "name": "raid_bdev1", 00:11:41.498 "uuid": "e27b02fd-5e79-4c39-94cb-f80614423d7e", 00:11:41.498 "strip_size_kb": 0, 00:11:41.498 "state": "online", 00:11:41.498 "raid_level": "raid1", 00:11:41.498 "superblock": true, 00:11:41.498 "num_base_bdevs": 4, 00:11:41.498 "num_base_bdevs_discovered": 4, 00:11:41.498 
"num_base_bdevs_operational": 4, 00:11:41.498 "base_bdevs_list": [ 00:11:41.498 { 00:11:41.498 "name": "BaseBdev1", 00:11:41.498 "uuid": "f93284b2-a3b4-5a79-99bc-3aec59f48588", 00:11:41.498 "is_configured": true, 00:11:41.498 "data_offset": 2048, 00:11:41.498 "data_size": 63488 00:11:41.498 }, 00:11:41.498 { 00:11:41.498 "name": "BaseBdev2", 00:11:41.498 "uuid": "b709a78f-178f-5348-81f9-dc3fadb2df79", 00:11:41.498 "is_configured": true, 00:11:41.498 "data_offset": 2048, 00:11:41.498 "data_size": 63488 00:11:41.498 }, 00:11:41.498 { 00:11:41.498 "name": "BaseBdev3", 00:11:41.498 "uuid": "266d0010-f5f2-54ea-8766-c7746ed1e9db", 00:11:41.498 "is_configured": true, 00:11:41.498 "data_offset": 2048, 00:11:41.498 "data_size": 63488 00:11:41.498 }, 00:11:41.498 { 00:11:41.498 "name": "BaseBdev4", 00:11:41.498 "uuid": "dff6c60d-1c4e-5fe9-80fc-19d735672e1b", 00:11:41.498 "is_configured": true, 00:11:41.498 "data_offset": 2048, 00:11:41.498 "data_size": 63488 00:11:41.498 } 00:11:41.498 ] 00:11:41.498 }' 00:11:41.498 03:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.498 03:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.759 03:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:41.759 03:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:42.018 [2024-11-20 03:18:31.424498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.958 [2024-11-20 03:18:32.351375] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:42.958 [2024-11-20 03:18:32.351444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.958 [2024-11-20 03:18:32.351705] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.958 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.958 "name": "raid_bdev1", 00:11:42.958 "uuid": "e27b02fd-5e79-4c39-94cb-f80614423d7e", 00:11:42.958 "strip_size_kb": 0, 00:11:42.958 "state": "online", 00:11:42.958 "raid_level": "raid1", 00:11:42.958 "superblock": true, 00:11:42.958 "num_base_bdevs": 4, 00:11:42.958 "num_base_bdevs_discovered": 3, 00:11:42.958 "num_base_bdevs_operational": 3, 00:11:42.958 "base_bdevs_list": [ 00:11:42.958 { 00:11:42.958 "name": null, 00:11:42.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.958 "is_configured": false, 00:11:42.958 "data_offset": 0, 00:11:42.958 "data_size": 63488 00:11:42.958 }, 00:11:42.958 { 00:11:42.958 "name": "BaseBdev2", 00:11:42.958 "uuid": "b709a78f-178f-5348-81f9-dc3fadb2df79", 00:11:42.958 "is_configured": true, 00:11:42.959 "data_offset": 2048, 00:11:42.959 "data_size": 63488 00:11:42.959 }, 00:11:42.959 { 00:11:42.959 "name": "BaseBdev3", 00:11:42.959 "uuid": "266d0010-f5f2-54ea-8766-c7746ed1e9db", 00:11:42.959 "is_configured": true, 00:11:42.959 "data_offset": 2048, 00:11:42.959 "data_size": 63488 00:11:42.959 }, 00:11:42.959 { 00:11:42.959 "name": "BaseBdev4", 00:11:42.959 "uuid": "dff6c60d-1c4e-5fe9-80fc-19d735672e1b", 00:11:42.959 "is_configured": true, 00:11:42.959 "data_offset": 2048, 00:11:42.959 "data_size": 63488 00:11:42.959 } 00:11:42.959 ] 
00:11:42.959 }' 00:11:42.959 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.959 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.219 [2024-11-20 03:18:32.795551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.219 [2024-11-20 03:18:32.795701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.219 [2024-11-20 03:18:32.798648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.219 [2024-11-20 03:18:32.798735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.219 [2024-11-20 03:18:32.798885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.219 [2024-11-20 03:18:32.798949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.219 { 00:11:43.219 "results": [ 00:11:43.219 { 00:11:43.219 "job": "raid_bdev1", 00:11:43.219 "core_mask": "0x1", 00:11:43.219 "workload": "randrw", 00:11:43.219 "percentage": 50, 00:11:43.219 "status": "finished", 00:11:43.219 "queue_depth": 1, 00:11:43.219 "io_size": 131072, 00:11:43.219 "runtime": 1.372059, 00:11:43.219 "iops": 11224.007130888685, 00:11:43.219 "mibps": 1403.0008913610857, 00:11:43.219 "io_failed": 0, 00:11:43.219 "io_timeout": 0, 00:11:43.219 "avg_latency_us": 86.35762490784326, 00:11:43.219 "min_latency_us": 23.699563318777294, 
00:11:43.219 "max_latency_us": 1509.6174672489083 00:11:43.219 } 00:11:43.219 ], 00:11:43.219 "core_count": 1 00:11:43.219 } 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75000 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75000 ']' 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75000 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75000 00:11:43.219 killing process with pid 75000 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75000' 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75000 00:11:43.219 03:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75000 00:11:43.219 [2024-11-20 03:18:32.843064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.788 [2024-11-20 03:18:33.175408] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.727 03:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zzEGStdBSe 00:11:44.727 03:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:44.727 03:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:44.727 03:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # fail_per_s=0.00 00:11:44.727 03:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:44.727 03:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.727 03:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:44.727 03:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:44.727 00:11:44.727 real 0m4.732s 00:11:44.727 user 0m5.603s 00:11:44.727 sys 0m0.567s 00:11:44.985 ************************************ 00:11:44.985 END TEST raid_write_error_test 00:11:44.985 ************************************ 00:11:44.985 03:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.985 03:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.985 03:18:34 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:44.985 03:18:34 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:44.985 03:18:34 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:44.985 03:18:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:44.985 03:18:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.985 03:18:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.985 ************************************ 00:11:44.985 START TEST raid_rebuild_test 00:11:44.985 ************************************ 00:11:44.985 03:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:44.985 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:44.985 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:44.985 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:44.985 
03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:44.985 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:44.985 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:44.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75145 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75145 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75145 ']' 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.986 03:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.986 [2024-11-20 03:18:34.530320] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:11:44.986 [2024-11-20 03:18:34.530553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75145 ] 00:11:44.986 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:44.986 Zero copy mechanism will not be used. 
00:11:45.244 [2024-11-20 03:18:34.706540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.244 [2024-11-20 03:18:34.824081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.503 [2024-11-20 03:18:35.033619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.503 [2024-11-20 03:18:35.033774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.761 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.761 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.761 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.761 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.761 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.761 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.020 BaseBdev1_malloc 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.020 [2024-11-20 03:18:35.432121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:46.020 [2024-11-20 03:18:35.432257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.020 [2024-11-20 03:18:35.432288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:46.020 [2024-11-20 03:18:35.432300] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.020 [2024-11-20 03:18:35.434415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.020 [2024-11-20 03:18:35.434494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.020 BaseBdev1 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.020 BaseBdev2_malloc 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.020 [2024-11-20 03:18:35.487139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:46.020 [2024-11-20 03:18:35.487210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.020 [2024-11-20 03:18:35.487231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:46.020 [2024-11-20 03:18:35.487243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.020 [2024-11-20 03:18:35.489338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.020 [2024-11-20 03:18:35.489447] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:46.020 BaseBdev2 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.020 spare_malloc 00:11:46.020 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.021 spare_delay 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.021 [2024-11-20 03:18:35.562788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:46.021 [2024-11-20 03:18:35.562854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.021 [2024-11-20 03:18:35.562877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:46.021 [2024-11-20 03:18:35.562888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.021 [2024-11-20 
03:18:35.565154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.021 [2024-11-20 03:18:35.565285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:46.021 spare 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.021 [2024-11-20 03:18:35.574827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.021 [2024-11-20 03:18:35.576675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.021 [2024-11-20 03:18:35.576767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:46.021 [2024-11-20 03:18:35.576781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:46.021 [2024-11-20 03:18:35.577043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:46.021 [2024-11-20 03:18:35.577220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:46.021 [2024-11-20 03:18:35.577231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:46.021 [2024-11-20 03:18:35.577405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:46.021 03:18:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.021 "name": "raid_bdev1", 00:11:46.021 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:46.021 "strip_size_kb": 0, 00:11:46.021 "state": "online", 00:11:46.021 "raid_level": "raid1", 00:11:46.021 "superblock": false, 00:11:46.021 "num_base_bdevs": 2, 00:11:46.021 "num_base_bdevs_discovered": 2, 00:11:46.021 "num_base_bdevs_operational": 2, 00:11:46.021 "base_bdevs_list": [ 00:11:46.021 { 00:11:46.021 "name": "BaseBdev1", 
00:11:46.021 "uuid": "66e2441c-90be-51f2-8a20-8ade921f177b", 00:11:46.021 "is_configured": true, 00:11:46.021 "data_offset": 0, 00:11:46.021 "data_size": 65536 00:11:46.021 }, 00:11:46.021 { 00:11:46.021 "name": "BaseBdev2", 00:11:46.021 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:46.021 "is_configured": true, 00:11:46.021 "data_offset": 0, 00:11:46.021 "data_size": 65536 00:11:46.021 } 00:11:46.021 ] 00:11:46.021 }' 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.021 03:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.591 [2024-11-20 03:18:36.018572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:46.591 
03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:46.591 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:46.851 [2024-11-20 03:18:36.317765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:46.851 /dev/nbd0 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.851 1+0 records in 00:11:46.851 1+0 records out 00:11:46.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444803 s, 9.2 MB/s 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:46.851 03:18:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:11:51.066 65536+0 records in 00:11:51.066 65536+0 records out 00:11:51.066 33554432 bytes (34 MB, 32 MiB) copied, 4.2179 s, 8.0 MB/s 00:11:51.066 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:51.066 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:51.066 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:51.066 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:51.066 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:51.066 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.066 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:51.325 [2024-11-20 03:18:40.810248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.325 [2024-11-20 03:18:40.846298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.325 03:18:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.325 "name": "raid_bdev1", 00:11:51.325 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:51.325 "strip_size_kb": 0, 00:11:51.325 "state": "online", 00:11:51.325 "raid_level": "raid1", 00:11:51.325 "superblock": false, 00:11:51.325 "num_base_bdevs": 2, 00:11:51.325 "num_base_bdevs_discovered": 1, 00:11:51.325 "num_base_bdevs_operational": 1, 00:11:51.325 "base_bdevs_list": [ 00:11:51.325 { 00:11:51.325 "name": null, 00:11:51.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.325 "is_configured": false, 00:11:51.325 "data_offset": 0, 00:11:51.325 "data_size": 65536 00:11:51.325 }, 00:11:51.325 { 00:11:51.325 "name": "BaseBdev2", 00:11:51.325 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:51.325 "is_configured": true, 00:11:51.325 "data_offset": 0, 00:11:51.325 "data_size": 65536 00:11:51.325 } 00:11:51.325 ] 00:11:51.325 }' 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.325 03:18:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.963 03:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:51.963 03:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.963 03:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.963 [2024-11-20 03:18:41.321511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:51.963 [2024-11-20 03:18:41.339580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:11:51.963 03:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.963 03:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:51.963 [2024-11-20 03:18:41.341731] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.898 "name": "raid_bdev1", 00:11:52.898 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:52.898 "strip_size_kb": 0, 00:11:52.898 "state": "online", 00:11:52.898 "raid_level": "raid1", 00:11:52.898 "superblock": false, 00:11:52.898 "num_base_bdevs": 2, 00:11:52.898 "num_base_bdevs_discovered": 2, 00:11:52.898 "num_base_bdevs_operational": 2, 00:11:52.898 "process": { 00:11:52.898 "type": "rebuild", 00:11:52.898 "target": "spare", 00:11:52.898 "progress": { 00:11:52.898 "blocks": 20480, 00:11:52.898 "percent": 31 00:11:52.898 } 00:11:52.898 }, 00:11:52.898 "base_bdevs_list": [ 00:11:52.898 { 00:11:52.898 "name": "spare", 00:11:52.898 "uuid": "2fa50680-7fe9-5694-97f2-9db14f406be4", 00:11:52.898 "is_configured": true, 00:11:52.898 "data_offset": 0, 00:11:52.898 
"data_size": 65536 00:11:52.898 }, 00:11:52.898 { 00:11:52.898 "name": "BaseBdev2", 00:11:52.898 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:52.898 "is_configured": true, 00:11:52.898 "data_offset": 0, 00:11:52.898 "data_size": 65536 00:11:52.898 } 00:11:52.898 ] 00:11:52.898 }' 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.898 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.898 [2024-11-20 03:18:42.501182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.158 [2024-11-20 03:18:42.547484] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:53.158 [2024-11-20 03:18:42.547570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.158 [2024-11-20 03:18:42.547586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.158 [2024-11-20 03:18:42.547597] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.158 "name": "raid_bdev1", 00:11:53.158 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:53.158 "strip_size_kb": 0, 00:11:53.158 "state": "online", 00:11:53.158 "raid_level": "raid1", 00:11:53.158 "superblock": false, 00:11:53.158 "num_base_bdevs": 2, 00:11:53.158 "num_base_bdevs_discovered": 1, 00:11:53.158 "num_base_bdevs_operational": 1, 00:11:53.158 "base_bdevs_list": [ 00:11:53.158 { 00:11:53.158 "name": null, 00:11:53.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.158 
"is_configured": false, 00:11:53.158 "data_offset": 0, 00:11:53.158 "data_size": 65536 00:11:53.158 }, 00:11:53.158 { 00:11:53.158 "name": "BaseBdev2", 00:11:53.158 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:53.158 "is_configured": true, 00:11:53.158 "data_offset": 0, 00:11:53.158 "data_size": 65536 00:11:53.158 } 00:11:53.158 ] 00:11:53.158 }' 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.158 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.417 03:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.417 03:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.417 "name": "raid_bdev1", 00:11:53.417 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:53.417 "strip_size_kb": 0, 00:11:53.417 "state": "online", 00:11:53.417 "raid_level": "raid1", 00:11:53.417 "superblock": false, 00:11:53.417 "num_base_bdevs": 2, 00:11:53.417 
"num_base_bdevs_discovered": 1, 00:11:53.417 "num_base_bdevs_operational": 1, 00:11:53.417 "base_bdevs_list": [ 00:11:53.417 { 00:11:53.417 "name": null, 00:11:53.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.417 "is_configured": false, 00:11:53.417 "data_offset": 0, 00:11:53.417 "data_size": 65536 00:11:53.417 }, 00:11:53.417 { 00:11:53.417 "name": "BaseBdev2", 00:11:53.417 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:53.417 "is_configured": true, 00:11:53.417 "data_offset": 0, 00:11:53.417 "data_size": 65536 00:11:53.418 } 00:11:53.418 ] 00:11:53.418 }' 00:11:53.418 03:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.677 03:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:53.677 03:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.677 03:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:53.677 03:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:53.677 03:18:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.677 03:18:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.677 [2024-11-20 03:18:43.135498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:53.677 [2024-11-20 03:18:43.152299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:53.677 03:18:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.677 03:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:53.677 [2024-11-20 03:18:43.154228] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:54.615 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.615 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.615 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.615 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.616 "name": "raid_bdev1", 00:11:54.616 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:54.616 "strip_size_kb": 0, 00:11:54.616 "state": "online", 00:11:54.616 "raid_level": "raid1", 00:11:54.616 "superblock": false, 00:11:54.616 "num_base_bdevs": 2, 00:11:54.616 "num_base_bdevs_discovered": 2, 00:11:54.616 "num_base_bdevs_operational": 2, 00:11:54.616 "process": { 00:11:54.616 "type": "rebuild", 00:11:54.616 "target": "spare", 00:11:54.616 "progress": { 00:11:54.616 "blocks": 20480, 00:11:54.616 "percent": 31 00:11:54.616 } 00:11:54.616 }, 00:11:54.616 "base_bdevs_list": [ 00:11:54.616 { 00:11:54.616 "name": "spare", 00:11:54.616 "uuid": "2fa50680-7fe9-5694-97f2-9db14f406be4", 00:11:54.616 "is_configured": true, 00:11:54.616 "data_offset": 0, 00:11:54.616 "data_size": 65536 00:11:54.616 }, 00:11:54.616 { 00:11:54.616 "name": "BaseBdev2", 00:11:54.616 "uuid": 
"94694413-606f-5009-950d-8bcffe005fee", 00:11:54.616 "is_configured": true, 00:11:54.616 "data_offset": 0, 00:11:54.616 "data_size": 65536 00:11:54.616 } 00:11:54.616 ] 00:11:54.616 }' 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:54.616 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=368 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.875 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.876 "name": "raid_bdev1", 00:11:54.876 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:54.876 "strip_size_kb": 0, 00:11:54.876 "state": "online", 00:11:54.876 "raid_level": "raid1", 00:11:54.876 "superblock": false, 00:11:54.876 "num_base_bdevs": 2, 00:11:54.876 "num_base_bdevs_discovered": 2, 00:11:54.876 "num_base_bdevs_operational": 2, 00:11:54.876 "process": { 00:11:54.876 "type": "rebuild", 00:11:54.876 "target": "spare", 00:11:54.876 "progress": { 00:11:54.876 "blocks": 22528, 00:11:54.876 "percent": 34 00:11:54.876 } 00:11:54.876 }, 00:11:54.876 "base_bdevs_list": [ 00:11:54.876 { 00:11:54.876 "name": "spare", 00:11:54.876 "uuid": "2fa50680-7fe9-5694-97f2-9db14f406be4", 00:11:54.876 "is_configured": true, 00:11:54.876 "data_offset": 0, 00:11:54.876 "data_size": 65536 00:11:54.876 }, 00:11:54.876 { 00:11:54.876 "name": "BaseBdev2", 00:11:54.876 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:54.876 "is_configured": true, 00:11:54.876 "data_offset": 0, 00:11:54.876 "data_size": 65536 00:11:54.876 } 00:11:54.876 ] 00:11:54.876 }' 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:54.876 03:18:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.816 03:18:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.076 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.076 "name": "raid_bdev1", 00:11:56.076 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:56.076 "strip_size_kb": 0, 00:11:56.076 "state": "online", 00:11:56.076 "raid_level": "raid1", 00:11:56.076 "superblock": false, 00:11:56.076 "num_base_bdevs": 2, 00:11:56.076 "num_base_bdevs_discovered": 2, 00:11:56.076 "num_base_bdevs_operational": 2, 00:11:56.076 "process": { 00:11:56.076 "type": "rebuild", 00:11:56.076 "target": "spare", 00:11:56.076 "progress": { 00:11:56.076 "blocks": 45056, 00:11:56.076 "percent": 68 00:11:56.076 } 00:11:56.076 }, 00:11:56.076 "base_bdevs_list": [ 00:11:56.076 { 00:11:56.076 "name": "spare", 00:11:56.076 "uuid": 
"2fa50680-7fe9-5694-97f2-9db14f406be4", 00:11:56.076 "is_configured": true, 00:11:56.076 "data_offset": 0, 00:11:56.076 "data_size": 65536 00:11:56.076 }, 00:11:56.076 { 00:11:56.076 "name": "BaseBdev2", 00:11:56.076 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:56.076 "is_configured": true, 00:11:56.076 "data_offset": 0, 00:11:56.076 "data_size": 65536 00:11:56.076 } 00:11:56.076 ] 00:11:56.076 }' 00:11:56.076 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.076 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.076 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.076 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.076 03:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:57.017 [2024-11-20 03:18:46.368973] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:57.017 [2024-11-20 03:18:46.369135] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:57.017 [2024-11-20 03:18:46.369188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.017 03:18:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.017 "name": "raid_bdev1", 00:11:57.017 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:57.017 "strip_size_kb": 0, 00:11:57.017 "state": "online", 00:11:57.017 "raid_level": "raid1", 00:11:57.017 "superblock": false, 00:11:57.017 "num_base_bdevs": 2, 00:11:57.017 "num_base_bdevs_discovered": 2, 00:11:57.017 "num_base_bdevs_operational": 2, 00:11:57.017 "base_bdevs_list": [ 00:11:57.017 { 00:11:57.017 "name": "spare", 00:11:57.017 "uuid": "2fa50680-7fe9-5694-97f2-9db14f406be4", 00:11:57.017 "is_configured": true, 00:11:57.017 "data_offset": 0, 00:11:57.017 "data_size": 65536 00:11:57.017 }, 00:11:57.017 { 00:11:57.017 "name": "BaseBdev2", 00:11:57.017 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:57.017 "is_configured": true, 00:11:57.017 "data_offset": 0, 00:11:57.017 "data_size": 65536 00:11:57.017 } 00:11:57.017 ] 00:11:57.017 }' 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:57.017 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.277 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.277 "name": "raid_bdev1", 00:11:57.277 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:57.277 "strip_size_kb": 0, 00:11:57.277 "state": "online", 00:11:57.277 "raid_level": "raid1", 00:11:57.277 "superblock": false, 00:11:57.277 "num_base_bdevs": 2, 00:11:57.277 "num_base_bdevs_discovered": 2, 00:11:57.277 "num_base_bdevs_operational": 2, 00:11:57.277 "base_bdevs_list": [ 00:11:57.277 { 00:11:57.277 "name": "spare", 00:11:57.277 "uuid": "2fa50680-7fe9-5694-97f2-9db14f406be4", 00:11:57.277 "is_configured": true, 00:11:57.277 "data_offset": 0, 00:11:57.277 "data_size": 65536 00:11:57.277 }, 00:11:57.277 { 00:11:57.277 "name": "BaseBdev2", 00:11:57.278 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:57.278 "is_configured": true, 00:11:57.278 "data_offset": 0, 00:11:57.278 "data_size": 65536 
00:11:57.278 } 00:11:57.278 ] 00:11:57.278 }' 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.278 
03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.278 "name": "raid_bdev1", 00:11:57.278 "uuid": "3f98539d-f788-43ca-8884-e42ee0bfb802", 00:11:57.278 "strip_size_kb": 0, 00:11:57.278 "state": "online", 00:11:57.278 "raid_level": "raid1", 00:11:57.278 "superblock": false, 00:11:57.278 "num_base_bdevs": 2, 00:11:57.278 "num_base_bdevs_discovered": 2, 00:11:57.278 "num_base_bdevs_operational": 2, 00:11:57.278 "base_bdevs_list": [ 00:11:57.278 { 00:11:57.278 "name": "spare", 00:11:57.278 "uuid": "2fa50680-7fe9-5694-97f2-9db14f406be4", 00:11:57.278 "is_configured": true, 00:11:57.278 "data_offset": 0, 00:11:57.278 "data_size": 65536 00:11:57.278 }, 00:11:57.278 { 00:11:57.278 "name": "BaseBdev2", 00:11:57.278 "uuid": "94694413-606f-5009-950d-8bcffe005fee", 00:11:57.278 "is_configured": true, 00:11:57.278 "data_offset": 0, 00:11:57.278 "data_size": 65536 00:11:57.278 } 00:11:57.278 ] 00:11:57.278 }' 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.278 03:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.855 [2024-11-20 03:18:47.232215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.855 [2024-11-20 03:18:47.232315] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.855 [2024-11-20 03:18:47.232428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.855 [2024-11-20 03:18:47.232513] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.855 [2024-11-20 03:18:47.232548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:57.855 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:58.138 /dev/nbd0 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.138 1+0 records in 00:11:58.138 1+0 records out 00:11:58.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509537 s, 8.0 MB/s 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.138 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:58.138 /dev/nbd1 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.414 1+0 records in 00:11:58.414 1+0 records out 00:11:58.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318674 s, 12.9 MB/s 00:11:58.414 03:18:47 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.414 03:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:58.673 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.673 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.673 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.673 
03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.673 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.673 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.673 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:58.673 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.673 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.673 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:58.933 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:58.933 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:58.933 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:58.933 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.933 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.933 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:58.933 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:58.933 03:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.933 03:18:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75145 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75145 ']' 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75145 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75145 00:11:58.934 killing process with pid 75145 00:11:58.934 Received shutdown signal, test time was about 60.000000 seconds 00:11:58.934 00:11:58.934 Latency(us) 00:11:58.934 [2024-11-20T03:18:48.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.934 [2024-11-20T03:18:48.569Z] =================================================================================================================== 00:11:58.934 [2024-11-20T03:18:48.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75145' 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75145 00:11:58.934 [2024-11-20 03:18:48.493831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.934 03:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75145 00:11:59.194 [2024-11-20 03:18:48.800312] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:00.578 00:12:00.578 real 0m15.472s 00:12:00.578 user 0m17.362s 00:12:00.578 sys 0m3.009s 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.578 ************************************ 00:12:00.578 END TEST raid_rebuild_test 00:12:00.578 ************************************ 00:12:00.578 03:18:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.578 03:18:49 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:00.578 03:18:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:00.578 03:18:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.578 03:18:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.578 ************************************ 00:12:00.578 START TEST raid_rebuild_test_sb 00:12:00.578 ************************************ 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:00.578 03:18:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75564 00:12:00.578 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:00.579 03:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75564 00:12:00.579 03:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75564 ']' 00:12:00.579 03:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.579 03:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.579 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.579 03:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.579 03:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.579 03:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.579 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:00.579 Zero copy mechanism will not be used. 00:12:00.579 [2024-11-20 03:18:50.081057] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:00.579 [2024-11-20 03:18:50.081200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75564 ] 00:12:00.838 [2024-11-20 03:18:50.257694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.838 [2024-11-20 03:18:50.372231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.098 [2024-11-20 03:18:50.583579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.098 [2024-11-20 03:18:50.583624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.358 BaseBdev1_malloc 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.358 [2024-11-20 03:18:50.972672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:01.358 [2024-11-20 03:18:50.972813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.358 [2024-11-20 03:18:50.972863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:01.358 [2024-11-20 03:18:50.972904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.358 [2024-11-20 03:18:50.975097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.358 [2024-11-20 03:18:50.975190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:01.358 BaseBdev1 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.358 03:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.619 BaseBdev2_malloc 00:12:01.619 
03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.619 [2024-11-20 03:18:51.029605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:01.619 [2024-11-20 03:18:51.029745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.619 [2024-11-20 03:18:51.029784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:01.619 [2024-11-20 03:18:51.029817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.619 [2024-11-20 03:18:51.032062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.619 [2024-11-20 03:18:51.032156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:01.619 BaseBdev2 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.619 spare_malloc 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.619 spare_delay 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.619 [2024-11-20 03:18:51.105482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:01.619 [2024-11-20 03:18:51.105552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.619 [2024-11-20 03:18:51.105594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:01.619 [2024-11-20 03:18:51.105606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.619 [2024-11-20 03:18:51.108051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.619 [2024-11-20 03:18:51.108096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:01.619 spare 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.619 [2024-11-20 03:18:51.113540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.619 [2024-11-20 
03:18:51.115625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.619 [2024-11-20 03:18:51.115847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:01.619 [2024-11-20 03:18:51.115872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.619 [2024-11-20 03:18:51.116159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:01.619 [2024-11-20 03:18:51.116345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:01.619 [2024-11-20 03:18:51.116355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:01.619 [2024-11-20 03:18:51.116530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.619 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.619 "name": "raid_bdev1", 00:12:01.619 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:01.619 "strip_size_kb": 0, 00:12:01.620 "state": "online", 00:12:01.620 "raid_level": "raid1", 00:12:01.620 "superblock": true, 00:12:01.620 "num_base_bdevs": 2, 00:12:01.620 "num_base_bdevs_discovered": 2, 00:12:01.620 "num_base_bdevs_operational": 2, 00:12:01.620 "base_bdevs_list": [ 00:12:01.620 { 00:12:01.620 "name": "BaseBdev1", 00:12:01.620 "uuid": "70e1c405-7d25-52b1-b5be-e8ac673b3dd7", 00:12:01.620 "is_configured": true, 00:12:01.620 "data_offset": 2048, 00:12:01.620 "data_size": 63488 00:12:01.620 }, 00:12:01.620 { 00:12:01.620 "name": "BaseBdev2", 00:12:01.620 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:01.620 "is_configured": true, 00:12:01.620 "data_offset": 2048, 00:12:01.620 "data_size": 63488 00:12:01.620 } 00:12:01.620 ] 00:12:01.620 }' 00:12:01.620 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.620 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:02.189 [2024-11-20 03:18:51.569114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.189 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:02.449 [2024-11-20 03:18:51.852331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:02.449 /dev/nbd0 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:02.449 1+0 records in 00:12:02.449 1+0 records out 00:12:02.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490666 s, 8.3 MB/s 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:02.449 03:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:06.649 63488+0 records in 00:12:06.649 63488+0 records out 00:12:06.649 32505856 bytes (33 MB, 31 MiB) copied, 4.06645 s, 8.0 MB/s 00:12:06.649 03:18:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:06.649 03:18:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.649 03:18:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:06.649 03:18:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.649 03:18:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:12:06.649 03:18:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.649 03:18:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:06.649 [2024-11-20 03:18:56.214018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.649 [2024-11-20 03:18:56.232673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.649 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.910 "name": "raid_bdev1", 00:12:06.910 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:06.910 "strip_size_kb": 0, 00:12:06.910 "state": "online", 00:12:06.910 "raid_level": "raid1", 00:12:06.910 "superblock": true, 00:12:06.910 "num_base_bdevs": 2, 00:12:06.910 "num_base_bdevs_discovered": 1, 00:12:06.910 "num_base_bdevs_operational": 1, 00:12:06.910 "base_bdevs_list": [ 00:12:06.910 { 00:12:06.910 "name": null, 00:12:06.910 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.910 "is_configured": false, 00:12:06.910 "data_offset": 0, 00:12:06.910 "data_size": 63488 00:12:06.910 }, 00:12:06.910 { 00:12:06.910 "name": "BaseBdev2", 00:12:06.910 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:06.910 "is_configured": true, 00:12:06.910 "data_offset": 2048, 00:12:06.910 "data_size": 63488 00:12:06.910 } 00:12:06.910 ] 00:12:06.910 }' 00:12:06.910 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.910 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.170 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:07.170 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.170 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.170 [2024-11-20 03:18:56.663935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.170 [2024-11-20 03:18:56.680322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:07.170 03:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.170 [2024-11-20 03:18:56.682321] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:07.170 03:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:08.109 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.109 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.109 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.109 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.109 
03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.109 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.109 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.109 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.109 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.109 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.110 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.110 "name": "raid_bdev1", 00:12:08.110 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:08.110 "strip_size_kb": 0, 00:12:08.110 "state": "online", 00:12:08.110 "raid_level": "raid1", 00:12:08.110 "superblock": true, 00:12:08.110 "num_base_bdevs": 2, 00:12:08.110 "num_base_bdevs_discovered": 2, 00:12:08.110 "num_base_bdevs_operational": 2, 00:12:08.110 "process": { 00:12:08.110 "type": "rebuild", 00:12:08.110 "target": "spare", 00:12:08.110 "progress": { 00:12:08.110 "blocks": 20480, 00:12:08.110 "percent": 32 00:12:08.110 } 00:12:08.110 }, 00:12:08.110 "base_bdevs_list": [ 00:12:08.110 { 00:12:08.110 "name": "spare", 00:12:08.110 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:08.110 "is_configured": true, 00:12:08.110 "data_offset": 2048, 00:12:08.110 "data_size": 63488 00:12:08.110 }, 00:12:08.110 { 00:12:08.110 "name": "BaseBdev2", 00:12:08.110 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:08.110 "is_configured": true, 00:12:08.110 "data_offset": 2048, 00:12:08.110 "data_size": 63488 00:12:08.110 } 00:12:08.110 ] 00:12:08.110 }' 00:12:08.110 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.370 [2024-11-20 03:18:57.821607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.370 [2024-11-20 03:18:57.887909] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:08.370 [2024-11-20 03:18:57.888090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.370 [2024-11-20 03:18:57.888109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.370 [2024-11-20 03:18:57.888120] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.370 "name": "raid_bdev1", 00:12:08.370 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:08.370 "strip_size_kb": 0, 00:12:08.370 "state": "online", 00:12:08.370 "raid_level": "raid1", 00:12:08.370 "superblock": true, 00:12:08.370 "num_base_bdevs": 2, 00:12:08.370 "num_base_bdevs_discovered": 1, 00:12:08.370 "num_base_bdevs_operational": 1, 00:12:08.370 "base_bdevs_list": [ 00:12:08.370 { 00:12:08.370 "name": null, 00:12:08.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.370 "is_configured": false, 00:12:08.370 "data_offset": 0, 00:12:08.370 "data_size": 63488 00:12:08.370 }, 00:12:08.370 { 00:12:08.370 "name": "BaseBdev2", 00:12:08.370 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:08.370 "is_configured": true, 00:12:08.370 "data_offset": 2048, 00:12:08.370 "data_size": 63488 00:12:08.370 } 00:12:08.370 ] 00:12:08.370 }' 00:12:08.370 03:18:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.370 03:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.940 "name": "raid_bdev1", 00:12:08.940 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:08.940 "strip_size_kb": 0, 00:12:08.940 "state": "online", 00:12:08.940 "raid_level": "raid1", 00:12:08.940 "superblock": true, 00:12:08.940 "num_base_bdevs": 2, 00:12:08.940 "num_base_bdevs_discovered": 1, 00:12:08.940 "num_base_bdevs_operational": 1, 00:12:08.940 "base_bdevs_list": [ 00:12:08.940 { 00:12:08.940 "name": null, 00:12:08.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.940 "is_configured": false, 00:12:08.940 "data_offset": 0, 00:12:08.940 "data_size": 63488 00:12:08.940 }, 00:12:08.940 
{ 00:12:08.940 "name": "BaseBdev2", 00:12:08.940 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:08.940 "is_configured": true, 00:12:08.940 "data_offset": 2048, 00:12:08.940 "data_size": 63488 00:12:08.940 } 00:12:08.940 ] 00:12:08.940 }' 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.940 [2024-11-20 03:18:58.510909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:08.940 [2024-11-20 03:18:58.527903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.940 03:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:08.940 [2024-11-20 03:18:58.529761] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.320 03:18:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.320 "name": "raid_bdev1", 00:12:10.320 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:10.320 "strip_size_kb": 0, 00:12:10.320 "state": "online", 00:12:10.320 "raid_level": "raid1", 00:12:10.320 "superblock": true, 00:12:10.320 "num_base_bdevs": 2, 00:12:10.320 "num_base_bdevs_discovered": 2, 00:12:10.320 "num_base_bdevs_operational": 2, 00:12:10.320 "process": { 00:12:10.320 "type": "rebuild", 00:12:10.320 "target": "spare", 00:12:10.320 "progress": { 00:12:10.320 "blocks": 20480, 00:12:10.320 "percent": 32 00:12:10.320 } 00:12:10.320 }, 00:12:10.320 "base_bdevs_list": [ 00:12:10.320 { 00:12:10.320 "name": "spare", 00:12:10.320 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:10.320 "is_configured": true, 00:12:10.320 "data_offset": 2048, 00:12:10.320 "data_size": 63488 00:12:10.320 }, 00:12:10.320 { 00:12:10.320 "name": "BaseBdev2", 00:12:10.320 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:10.320 "is_configured": true, 00:12:10.320 "data_offset": 2048, 00:12:10.320 "data_size": 63488 00:12:10.320 } 00:12:10.320 ] 00:12:10.320 }' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:10.320 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=383 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.320 "name": "raid_bdev1", 00:12:10.320 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:10.320 "strip_size_kb": 0, 00:12:10.320 "state": "online", 00:12:10.320 "raid_level": "raid1", 00:12:10.320 "superblock": true, 00:12:10.320 "num_base_bdevs": 2, 00:12:10.320 "num_base_bdevs_discovered": 2, 00:12:10.320 "num_base_bdevs_operational": 2, 00:12:10.320 "process": { 00:12:10.320 "type": "rebuild", 00:12:10.320 "target": "spare", 00:12:10.320 "progress": { 00:12:10.320 "blocks": 22528, 00:12:10.320 "percent": 35 00:12:10.320 } 00:12:10.320 }, 00:12:10.320 "base_bdevs_list": [ 00:12:10.320 { 00:12:10.320 "name": "spare", 00:12:10.320 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:10.320 "is_configured": true, 00:12:10.320 "data_offset": 2048, 00:12:10.320 "data_size": 63488 00:12:10.320 }, 00:12:10.320 { 00:12:10.320 "name": "BaseBdev2", 00:12:10.320 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:10.320 "is_configured": true, 00:12:10.320 "data_offset": 2048, 00:12:10.320 "data_size": 63488 00:12:10.320 } 00:12:10.320 ] 00:12:10.320 }' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.320 03:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.320 03:18:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.293 "name": "raid_bdev1", 00:12:11.293 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:11.293 "strip_size_kb": 0, 00:12:11.293 "state": "online", 00:12:11.293 "raid_level": "raid1", 00:12:11.293 "superblock": true, 00:12:11.293 "num_base_bdevs": 2, 00:12:11.293 "num_base_bdevs_discovered": 2, 00:12:11.293 "num_base_bdevs_operational": 2, 00:12:11.293 "process": { 00:12:11.293 "type": "rebuild", 00:12:11.293 "target": "spare", 00:12:11.293 "progress": { 00:12:11.293 "blocks": 45056, 00:12:11.293 "percent": 70 00:12:11.293 } 00:12:11.293 }, 00:12:11.293 "base_bdevs_list": [ 00:12:11.293 { 
00:12:11.293 "name": "spare", 00:12:11.293 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:11.293 "is_configured": true, 00:12:11.293 "data_offset": 2048, 00:12:11.293 "data_size": 63488 00:12:11.293 }, 00:12:11.293 { 00:12:11.293 "name": "BaseBdev2", 00:12:11.293 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:11.293 "is_configured": true, 00:12:11.293 "data_offset": 2048, 00:12:11.293 "data_size": 63488 00:12:11.293 } 00:12:11.293 ] 00:12:11.293 }' 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.293 03:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:12.231 [2024-11-20 03:19:01.643687] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:12.231 [2024-11-20 03:19:01.643886] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:12.231 [2024-11-20 03:19:01.644013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.490 03:19:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.490 "name": "raid_bdev1", 00:12:12.490 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:12.490 "strip_size_kb": 0, 00:12:12.490 "state": "online", 00:12:12.490 "raid_level": "raid1", 00:12:12.490 "superblock": true, 00:12:12.490 "num_base_bdevs": 2, 00:12:12.490 "num_base_bdevs_discovered": 2, 00:12:12.490 "num_base_bdevs_operational": 2, 00:12:12.490 "base_bdevs_list": [ 00:12:12.490 { 00:12:12.490 "name": "spare", 00:12:12.490 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:12.490 "is_configured": true, 00:12:12.490 "data_offset": 2048, 00:12:12.490 "data_size": 63488 00:12:12.490 }, 00:12:12.490 { 00:12:12.490 "name": "BaseBdev2", 00:12:12.490 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:12.490 "is_configured": true, 00:12:12.490 "data_offset": 2048, 00:12:12.490 "data_size": 63488 00:12:12.490 } 00:12:12.490 ] 00:12:12.490 }' 00:12:12.490 03:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.490 "name": "raid_bdev1", 00:12:12.490 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:12.490 "strip_size_kb": 0, 00:12:12.490 "state": "online", 00:12:12.490 "raid_level": "raid1", 00:12:12.490 "superblock": true, 00:12:12.490 "num_base_bdevs": 2, 00:12:12.490 "num_base_bdevs_discovered": 2, 00:12:12.490 "num_base_bdevs_operational": 2, 00:12:12.490 "base_bdevs_list": [ 00:12:12.490 { 00:12:12.490 "name": "spare", 00:12:12.490 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:12.490 "is_configured": true, 00:12:12.490 "data_offset": 2048, 00:12:12.490 "data_size": 63488 00:12:12.490 }, 00:12:12.490 { 00:12:12.490 "name": 
"BaseBdev2", 00:12:12.490 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:12.490 "is_configured": true, 00:12:12.490 "data_offset": 2048, 00:12:12.490 "data_size": 63488 00:12:12.490 } 00:12:12.490 ] 00:12:12.490 }' 00:12:12.490 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.749 "name": "raid_bdev1", 00:12:12.749 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:12.749 "strip_size_kb": 0, 00:12:12.749 "state": "online", 00:12:12.749 "raid_level": "raid1", 00:12:12.749 "superblock": true, 00:12:12.749 "num_base_bdevs": 2, 00:12:12.749 "num_base_bdevs_discovered": 2, 00:12:12.749 "num_base_bdevs_operational": 2, 00:12:12.749 "base_bdevs_list": [ 00:12:12.749 { 00:12:12.749 "name": "spare", 00:12:12.749 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:12.749 "is_configured": true, 00:12:12.749 "data_offset": 2048, 00:12:12.749 "data_size": 63488 00:12:12.749 }, 00:12:12.749 { 00:12:12.749 "name": "BaseBdev2", 00:12:12.749 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:12.749 "is_configured": true, 00:12:12.749 "data_offset": 2048, 00:12:12.749 "data_size": 63488 00:12:12.749 } 00:12:12.749 ] 00:12:12.749 }' 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.749 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.009 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.009 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.009 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.009 [2024-11-20 03:19:02.610598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.009 [2024-11-20 03:19:02.610706] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.009 [2024-11-20 03:19:02.610818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.009 [2024-11-20 03:19:02.610909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.009 [2024-11-20 03:19:02.610957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:13.009 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.009 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.009 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.009 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.009 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:13.009 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:13.269 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:13.529 /dev/nbd0 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.529 1+0 records in 00:12:13.529 1+0 records out 00:12:13.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003069 
s, 13.3 MB/s 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:13.529 03:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:13.790 /dev/nbd1 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.790 03:19:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.790 1+0 records in 00:12:13.790 1+0 records out 00:12:13.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406116 s, 10.1 MB/s 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:13.790 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.790 
03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.050 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.310 [2024-11-20 03:19:03.883815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:14.310 [2024-11-20 03:19:03.883891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.310 [2024-11-20 03:19:03.883916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:14.310 [2024-11-20 03:19:03.883925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.310 [2024-11-20 03:19:03.886110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.310 [2024-11-20 03:19:03.886215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:14.310 [2024-11-20 03:19:03.886329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:14.310 [2024-11-20 03:19:03.886395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:14.310 [2024-11-20 03:19:03.886586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:12:14.310 spare 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.310 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.570 [2024-11-20 03:19:03.986518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:14.570 [2024-11-20 03:19:03.986575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:14.570 [2024-11-20 03:19:03.986932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:14.570 [2024-11-20 03:19:03.987129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:14.570 [2024-11-20 03:19:03.987140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:14.570 [2024-11-20 03:19:03.987334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.570 03:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.570 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.570 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.570 "name": "raid_bdev1", 00:12:14.570 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:14.570 "strip_size_kb": 0, 00:12:14.570 "state": "online", 00:12:14.570 "raid_level": "raid1", 00:12:14.570 "superblock": true, 00:12:14.570 "num_base_bdevs": 2, 00:12:14.570 "num_base_bdevs_discovered": 2, 00:12:14.570 "num_base_bdevs_operational": 2, 00:12:14.570 "base_bdevs_list": [ 00:12:14.570 { 00:12:14.570 "name": "spare", 00:12:14.570 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:14.570 "is_configured": true, 00:12:14.570 "data_offset": 2048, 00:12:14.570 "data_size": 63488 00:12:14.570 }, 00:12:14.570 { 00:12:14.570 "name": "BaseBdev2", 00:12:14.570 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:14.570 "is_configured": true, 00:12:14.570 "data_offset": 2048, 00:12:14.570 "data_size": 63488 00:12:14.570 } 00:12:14.570 ] 00:12:14.570 }' 00:12:14.570 03:19:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.570 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.830 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.090 "name": "raid_bdev1", 00:12:15.090 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:15.090 "strip_size_kb": 0, 00:12:15.090 "state": "online", 00:12:15.090 "raid_level": "raid1", 00:12:15.090 "superblock": true, 00:12:15.090 "num_base_bdevs": 2, 00:12:15.090 "num_base_bdevs_discovered": 2, 00:12:15.090 "num_base_bdevs_operational": 2, 00:12:15.090 "base_bdevs_list": [ 00:12:15.090 { 00:12:15.090 "name": "spare", 00:12:15.090 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:15.090 "is_configured": true, 00:12:15.090 "data_offset": 2048, 00:12:15.090 "data_size": 63488 00:12:15.090 }, 
00:12:15.090 { 00:12:15.090 "name": "BaseBdev2", 00:12:15.090 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:15.090 "is_configured": true, 00:12:15.090 "data_offset": 2048, 00:12:15.090 "data_size": 63488 00:12:15.090 } 00:12:15.090 ] 00:12:15.090 }' 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.090 [2024-11-20 03:19:04.610639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.090 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.091 "name": "raid_bdev1", 00:12:15.091 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:15.091 "strip_size_kb": 0, 00:12:15.091 "state": "online", 00:12:15.091 "raid_level": "raid1", 00:12:15.091 "superblock": true, 00:12:15.091 "num_base_bdevs": 2, 00:12:15.091 "num_base_bdevs_discovered": 1, 00:12:15.091 "num_base_bdevs_operational": 
1, 00:12:15.091 "base_bdevs_list": [ 00:12:15.091 { 00:12:15.091 "name": null, 00:12:15.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.091 "is_configured": false, 00:12:15.091 "data_offset": 0, 00:12:15.091 "data_size": 63488 00:12:15.091 }, 00:12:15.091 { 00:12:15.091 "name": "BaseBdev2", 00:12:15.091 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:15.091 "is_configured": true, 00:12:15.091 "data_offset": 2048, 00:12:15.091 "data_size": 63488 00:12:15.091 } 00:12:15.091 ] 00:12:15.091 }' 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.091 03:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.660 03:19:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:15.660 03:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.660 03:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.660 [2024-11-20 03:19:05.021982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.661 [2024-11-20 03:19:05.022187] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:15.661 [2024-11-20 03:19:05.022206] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:15.661 [2024-11-20 03:19:05.022243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.661 [2024-11-20 03:19:05.038287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:15.661 03:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.661 03:19:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:15.661 [2024-11-20 03:19:05.040238] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.600 "name": "raid_bdev1", 00:12:16.600 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:16.600 "strip_size_kb": 0, 00:12:16.600 "state": "online", 00:12:16.600 "raid_level": "raid1", 
00:12:16.600 "superblock": true, 00:12:16.600 "num_base_bdevs": 2, 00:12:16.600 "num_base_bdevs_discovered": 2, 00:12:16.600 "num_base_bdevs_operational": 2, 00:12:16.600 "process": { 00:12:16.600 "type": "rebuild", 00:12:16.600 "target": "spare", 00:12:16.600 "progress": { 00:12:16.600 "blocks": 20480, 00:12:16.600 "percent": 32 00:12:16.600 } 00:12:16.600 }, 00:12:16.600 "base_bdevs_list": [ 00:12:16.600 { 00:12:16.600 "name": "spare", 00:12:16.600 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:16.600 "is_configured": true, 00:12:16.600 "data_offset": 2048, 00:12:16.600 "data_size": 63488 00:12:16.600 }, 00:12:16.600 { 00:12:16.600 "name": "BaseBdev2", 00:12:16.600 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:16.600 "is_configured": true, 00:12:16.600 "data_offset": 2048, 00:12:16.600 "data_size": 63488 00:12:16.600 } 00:12:16.600 ] 00:12:16.600 }' 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.600 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.600 [2024-11-20 03:19:06.199547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.860 [2024-11-20 03:19:06.245685] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:16.861 [2024-11-20 03:19:06.245755] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:16.861 [2024-11-20 03:19:06.245772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.861 [2024-11-20 03:19:06.245781] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.861 "name": "raid_bdev1", 00:12:16.861 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:16.861 "strip_size_kb": 0, 00:12:16.861 "state": "online", 00:12:16.861 "raid_level": "raid1", 00:12:16.861 "superblock": true, 00:12:16.861 "num_base_bdevs": 2, 00:12:16.861 "num_base_bdevs_discovered": 1, 00:12:16.861 "num_base_bdevs_operational": 1, 00:12:16.861 "base_bdevs_list": [ 00:12:16.861 { 00:12:16.861 "name": null, 00:12:16.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.861 "is_configured": false, 00:12:16.861 "data_offset": 0, 00:12:16.861 "data_size": 63488 00:12:16.861 }, 00:12:16.861 { 00:12:16.861 "name": "BaseBdev2", 00:12:16.861 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:16.861 "is_configured": true, 00:12:16.861 "data_offset": 2048, 00:12:16.861 "data_size": 63488 00:12:16.861 } 00:12:16.861 ] 00:12:16.861 }' 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.861 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.121 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:17.121 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.121 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.121 [2024-11-20 03:19:06.747685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:17.121 [2024-11-20 03:19:06.747821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.121 [2024-11-20 03:19:06.747862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:17.121 [2024-11-20 03:19:06.747894] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.121 [2024-11-20 03:19:06.748392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.121 [2024-11-20 03:19:06.748461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:17.121 [2024-11-20 03:19:06.748593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:17.121 [2024-11-20 03:19:06.748652] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:17.121 [2024-11-20 03:19:06.748736] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:17.121 [2024-11-20 03:19:06.748791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:17.380 [2024-11-20 03:19:06.765161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:17.380 spare 00:12:17.380 03:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.380 [2024-11-20 03:19:06.767156] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:17.380 03:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.317 "name": "raid_bdev1", 00:12:18.317 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:18.317 "strip_size_kb": 0, 00:12:18.317 "state": "online", 00:12:18.317 "raid_level": "raid1", 00:12:18.317 "superblock": true, 00:12:18.317 "num_base_bdevs": 2, 00:12:18.317 "num_base_bdevs_discovered": 2, 00:12:18.317 "num_base_bdevs_operational": 2, 00:12:18.317 "process": { 00:12:18.317 "type": "rebuild", 00:12:18.317 "target": "spare", 00:12:18.317 "progress": { 00:12:18.317 "blocks": 20480, 00:12:18.317 "percent": 32 00:12:18.317 } 00:12:18.317 }, 00:12:18.317 "base_bdevs_list": [ 00:12:18.317 { 00:12:18.317 "name": "spare", 00:12:18.317 "uuid": "111ce87b-088c-577b-9cb6-9d8fecc688c1", 00:12:18.317 "is_configured": true, 00:12:18.317 "data_offset": 2048, 00:12:18.317 "data_size": 63488 00:12:18.317 }, 00:12:18.317 { 00:12:18.317 "name": "BaseBdev2", 00:12:18.317 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:18.317 "is_configured": true, 00:12:18.317 "data_offset": 2048, 00:12:18.317 "data_size": 63488 00:12:18.317 } 00:12:18.317 ] 00:12:18.317 }' 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.317 
03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.317 03:19:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.317 [2024-11-20 03:19:07.922607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.576 [2024-11-20 03:19:07.972711] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:18.576 [2024-11-20 03:19:07.972780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.576 [2024-11-20 03:19:07.972798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.576 [2024-11-20 03:19:07.972806] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.576 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.576 "name": "raid_bdev1", 00:12:18.576 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:18.576 "strip_size_kb": 0, 00:12:18.576 "state": "online", 00:12:18.576 "raid_level": "raid1", 00:12:18.576 "superblock": true, 00:12:18.576 "num_base_bdevs": 2, 00:12:18.576 "num_base_bdevs_discovered": 1, 00:12:18.576 "num_base_bdevs_operational": 1, 00:12:18.576 "base_bdevs_list": [ 00:12:18.576 { 00:12:18.576 "name": null, 00:12:18.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.576 "is_configured": false, 00:12:18.577 "data_offset": 0, 00:12:18.577 "data_size": 63488 00:12:18.577 }, 00:12:18.577 { 00:12:18.577 "name": "BaseBdev2", 00:12:18.577 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:18.577 "is_configured": true, 00:12:18.577 "data_offset": 2048, 00:12:18.577 "data_size": 63488 00:12:18.577 } 00:12:18.577 ] 00:12:18.577 }' 00:12:18.577 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.577 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.835 03:19:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.835 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.835 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.835 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.835 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.835 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.835 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.835 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.835 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.835 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.095 "name": "raid_bdev1", 00:12:19.095 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:19.095 "strip_size_kb": 0, 00:12:19.095 "state": "online", 00:12:19.095 "raid_level": "raid1", 00:12:19.095 "superblock": true, 00:12:19.095 "num_base_bdevs": 2, 00:12:19.095 "num_base_bdevs_discovered": 1, 00:12:19.095 "num_base_bdevs_operational": 1, 00:12:19.095 "base_bdevs_list": [ 00:12:19.095 { 00:12:19.095 "name": null, 00:12:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.095 "is_configured": false, 00:12:19.095 "data_offset": 0, 00:12:19.095 "data_size": 63488 00:12:19.095 }, 00:12:19.095 { 00:12:19.095 "name": "BaseBdev2", 00:12:19.095 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:19.095 "is_configured": true, 00:12:19.095 "data_offset": 2048, 00:12:19.095 "data_size": 
63488 00:12:19.095 } 00:12:19.095 ] 00:12:19.095 }' 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.095 [2024-11-20 03:19:08.607020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:19.095 [2024-11-20 03:19:08.607081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.095 [2024-11-20 03:19:08.607103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:19.095 [2024-11-20 03:19:08.607121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.095 [2024-11-20 03:19:08.607563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.095 [2024-11-20 03:19:08.607580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:19.095 [2024-11-20 03:19:08.607682] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:19.095 [2024-11-20 03:19:08.607700] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:19.095 [2024-11-20 03:19:08.607709] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:19.095 [2024-11-20 03:19:08.607719] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:19.095 BaseBdev1 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.095 03:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:20.032 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:20.032 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.033 03:19:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.293 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.293 "name": "raid_bdev1", 00:12:20.293 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:20.293 "strip_size_kb": 0, 00:12:20.293 "state": "online", 00:12:20.293 "raid_level": "raid1", 00:12:20.293 "superblock": true, 00:12:20.293 "num_base_bdevs": 2, 00:12:20.293 "num_base_bdevs_discovered": 1, 00:12:20.293 "num_base_bdevs_operational": 1, 00:12:20.293 "base_bdevs_list": [ 00:12:20.293 { 00:12:20.293 "name": null, 00:12:20.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.293 "is_configured": false, 00:12:20.293 "data_offset": 0, 00:12:20.293 "data_size": 63488 00:12:20.293 }, 00:12:20.293 { 00:12:20.293 "name": "BaseBdev2", 00:12:20.293 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:20.293 "is_configured": true, 00:12:20.293 "data_offset": 2048, 00:12:20.293 "data_size": 63488 00:12:20.293 } 00:12:20.293 ] 00:12:20.293 }' 00:12:20.293 03:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.293 03:19:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.553 "name": "raid_bdev1", 00:12:20.553 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:20.553 "strip_size_kb": 0, 00:12:20.553 "state": "online", 00:12:20.553 "raid_level": "raid1", 00:12:20.553 "superblock": true, 00:12:20.553 "num_base_bdevs": 2, 00:12:20.553 "num_base_bdevs_discovered": 1, 00:12:20.553 "num_base_bdevs_operational": 1, 00:12:20.553 "base_bdevs_list": [ 00:12:20.553 { 00:12:20.553 "name": null, 00:12:20.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.553 "is_configured": false, 00:12:20.553 "data_offset": 0, 00:12:20.553 "data_size": 63488 00:12:20.553 }, 00:12:20.553 { 00:12:20.553 "name": "BaseBdev2", 00:12:20.553 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:20.553 "is_configured": true, 00:12:20.553 "data_offset": 2048, 00:12:20.553 "data_size": 63488 00:12:20.553 } 00:12:20.553 ] 00:12:20.553 }' 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.553 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:20.553 03:19:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.812 [2024-11-20 03:19:10.224368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.812 [2024-11-20 03:19:10.224534] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:20.812 [2024-11-20 03:19:10.224550] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:20.812 request: 00:12:20.812 { 00:12:20.812 "base_bdev": "BaseBdev1", 00:12:20.812 "raid_bdev": "raid_bdev1", 00:12:20.812 "method": 
"bdev_raid_add_base_bdev", 00:12:20.812 "req_id": 1 00:12:20.812 } 00:12:20.812 Got JSON-RPC error response 00:12:20.812 response: 00:12:20.812 { 00:12:20.812 "code": -22, 00:12:20.812 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:20.812 } 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:20.812 03:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.751 03:19:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.751 "name": "raid_bdev1", 00:12:21.751 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:21.751 "strip_size_kb": 0, 00:12:21.751 "state": "online", 00:12:21.751 "raid_level": "raid1", 00:12:21.751 "superblock": true, 00:12:21.751 "num_base_bdevs": 2, 00:12:21.751 "num_base_bdevs_discovered": 1, 00:12:21.751 "num_base_bdevs_operational": 1, 00:12:21.751 "base_bdevs_list": [ 00:12:21.751 { 00:12:21.751 "name": null, 00:12:21.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.751 "is_configured": false, 00:12:21.751 "data_offset": 0, 00:12:21.751 "data_size": 63488 00:12:21.751 }, 00:12:21.751 { 00:12:21.751 "name": "BaseBdev2", 00:12:21.751 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:21.751 "is_configured": true, 00:12:21.751 "data_offset": 2048, 00:12:21.751 "data_size": 63488 00:12:21.751 } 00:12:21.751 ] 00:12:21.751 }' 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.751 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.320 "name": "raid_bdev1", 00:12:22.320 "uuid": "328ced3e-1774-4f69-9a5f-d470190246b9", 00:12:22.320 "strip_size_kb": 0, 00:12:22.320 "state": "online", 00:12:22.320 "raid_level": "raid1", 00:12:22.320 "superblock": true, 00:12:22.320 "num_base_bdevs": 2, 00:12:22.320 "num_base_bdevs_discovered": 1, 00:12:22.320 "num_base_bdevs_operational": 1, 00:12:22.320 "base_bdevs_list": [ 00:12:22.320 { 00:12:22.320 "name": null, 00:12:22.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.320 "is_configured": false, 00:12:22.320 "data_offset": 0, 00:12:22.320 "data_size": 63488 00:12:22.320 }, 00:12:22.320 { 00:12:22.320 "name": "BaseBdev2", 00:12:22.320 "uuid": "cd1cfb4d-6214-518e-8fa3-36e2a012c462", 00:12:22.320 "is_configured": true, 00:12:22.320 "data_offset": 2048, 00:12:22.320 "data_size": 63488 00:12:22.320 } 00:12:22.320 ] 00:12:22.320 }' 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75564 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75564 ']' 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75564 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75564 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.320 killing process with pid 75564 00:12:22.320 Received shutdown signal, test time was about 60.000000 seconds 00:12:22.320 00:12:22.320 Latency(us) 00:12:22.320 [2024-11-20T03:19:11.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.320 [2024-11-20T03:19:11.955Z] =================================================================================================================== 00:12:22.320 [2024-11-20T03:19:11.955Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75564' 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75564 00:12:22.320 [2024-11-20 03:19:11.918962] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.320 [2024-11-20 
03:19:11.919097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.320 03:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75564 00:12:22.320 [2024-11-20 03:19:11.919149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.320 [2024-11-20 03:19:11.919161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:22.889 [2024-11-20 03:19:12.232459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:23.828 ************************************ 00:12:23.828 END TEST raid_rebuild_test_sb 00:12:23.828 ************************************ 00:12:23.828 00:12:23.828 real 0m23.375s 00:12:23.828 user 0m28.665s 00:12:23.828 sys 0m3.708s 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.828 03:19:13 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:23.828 03:19:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:23.828 03:19:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.828 03:19:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:23.828 ************************************ 00:12:23.828 START TEST raid_rebuild_test_io 00:12:23.828 ************************************ 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:23.828 
03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76292 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76292 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76292 ']' 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.828 03:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.088 [2024-11-20 03:19:13.521129] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:24.088 [2024-11-20 03:19:13.521333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:24.088 Zero copy mechanism will not be used. 
00:12:24.088 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76292 ] 00:12:24.088 [2024-11-20 03:19:13.686721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.347 [2024-11-20 03:19:13.802466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.606 [2024-11-20 03:19:13.993395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.606 [2024-11-20 03:19:13.993431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.866 BaseBdev1_malloc 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.866 [2024-11-20 03:19:14.422715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:24.866 [2024-11-20 03:19:14.422847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:24.866 [2024-11-20 03:19:14.422880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:24.866 [2024-11-20 03:19:14.422894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.866 [2024-11-20 03:19:14.425273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.866 [2024-11-20 03:19:14.425313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:24.866 BaseBdev1 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.866 BaseBdev2_malloc 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.866 [2024-11-20 03:19:14.474252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:24.866 [2024-11-20 03:19:14.474393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.866 [2024-11-20 03:19:14.474426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:24.866 [2024-11-20 03:19:14.474439] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.866 [2024-11-20 03:19:14.476758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.866 [2024-11-20 03:19:14.476796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:24.866 BaseBdev2 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.866 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.126 spare_malloc 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.126 spare_delay 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.126 [2024-11-20 03:19:14.544224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:25.126 [2024-11-20 03:19:14.544401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:25.126 [2024-11-20 03:19:14.544429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:25.126 [2024-11-20 03:19:14.544442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.126 [2024-11-20 03:19:14.546623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.126 [2024-11-20 03:19:14.546659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:25.126 spare 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.126 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.127 [2024-11-20 03:19:14.552260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.127 [2024-11-20 03:19:14.554001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.127 [2024-11-20 03:19:14.554081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:25.127 [2024-11-20 03:19:14.554102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:25.127 [2024-11-20 03:19:14.554340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:25.127 [2024-11-20 03:19:14.554500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:25.127 [2024-11-20 03:19:14.554511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:25.127 [2024-11-20 03:19:14.554701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.127 "name": "raid_bdev1", 00:12:25.127 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:25.127 
"strip_size_kb": 0, 00:12:25.127 "state": "online", 00:12:25.127 "raid_level": "raid1", 00:12:25.127 "superblock": false, 00:12:25.127 "num_base_bdevs": 2, 00:12:25.127 "num_base_bdevs_discovered": 2, 00:12:25.127 "num_base_bdevs_operational": 2, 00:12:25.127 "base_bdevs_list": [ 00:12:25.127 { 00:12:25.127 "name": "BaseBdev1", 00:12:25.127 "uuid": "3c6f94fc-95d6-503c-ab2f-a720b83d2713", 00:12:25.127 "is_configured": true, 00:12:25.127 "data_offset": 0, 00:12:25.127 "data_size": 65536 00:12:25.127 }, 00:12:25.127 { 00:12:25.127 "name": "BaseBdev2", 00:12:25.127 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:25.127 "is_configured": true, 00:12:25.127 "data_offset": 0, 00:12:25.127 "data_size": 65536 00:12:25.127 } 00:12:25.127 ] 00:12:25.127 }' 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.127 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.386 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.386 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.386 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.386 03:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:25.386 [2024-11-20 03:19:14.983820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.386 03:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.646 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.647 [2024-11-20 03:19:15.087343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.647 03:19:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.647 "name": "raid_bdev1", 00:12:25.647 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:25.647 "strip_size_kb": 0, 00:12:25.647 "state": "online", 00:12:25.647 "raid_level": "raid1", 00:12:25.647 "superblock": false, 00:12:25.647 "num_base_bdevs": 2, 00:12:25.647 "num_base_bdevs_discovered": 1, 00:12:25.647 "num_base_bdevs_operational": 1, 00:12:25.647 "base_bdevs_list": [ 00:12:25.647 { 00:12:25.647 "name": null, 00:12:25.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.647 "is_configured": false, 00:12:25.647 "data_offset": 0, 00:12:25.647 "data_size": 65536 00:12:25.647 }, 00:12:25.647 { 00:12:25.647 "name": "BaseBdev2", 00:12:25.647 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:25.647 "is_configured": true, 00:12:25.647 "data_offset": 0, 00:12:25.647 "data_size": 65536 00:12:25.647 } 00:12:25.647 ] 00:12:25.647 }' 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.647 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:25.647 [2024-11-20 03:19:15.191823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:25.647 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:25.647 Zero copy mechanism will not be used. 00:12:25.647 Running I/O for 60 seconds... 00:12:25.907 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:25.907 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.907 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.907 [2024-11-20 03:19:15.522281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.166 03:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.166 03:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:26.166 [2024-11-20 03:19:15.583127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:26.166 [2024-11-20 03:19:15.585063] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:26.166 [2024-11-20 03:19:15.699903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:26.166 [2024-11-20 03:19:15.700485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:26.426 [2024-11-20 03:19:15.915665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:26.426 [2024-11-20 03:19:15.916012] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:26.685 [2024-11-20 03:19:16.158454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:12:26.685 173.00 IOPS, 519.00 MiB/s [2024-11-20T03:19:16.320Z] [2024-11-20 03:19:16.267754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:26.944 [2024-11-20 03:19:16.508436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:26.944 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.944 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.944 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.944 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.944 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.944 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.944 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.944 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.944 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.204 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.204 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.204 "name": "raid_bdev1", 00:12:27.204 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:27.204 "strip_size_kb": 0, 00:12:27.204 "state": "online", 00:12:27.204 "raid_level": "raid1", 00:12:27.204 "superblock": false, 00:12:27.204 "num_base_bdevs": 2, 00:12:27.204 "num_base_bdevs_discovered": 2, 00:12:27.204 "num_base_bdevs_operational": 2, 00:12:27.204 "process": { 
00:12:27.204 "type": "rebuild", 00:12:27.204 "target": "spare", 00:12:27.204 "progress": { 00:12:27.204 "blocks": 14336, 00:12:27.204 "percent": 21 00:12:27.204 } 00:12:27.204 }, 00:12:27.204 "base_bdevs_list": [ 00:12:27.204 { 00:12:27.204 "name": "spare", 00:12:27.204 "uuid": "b6f0a245-533d-5ed1-890c-f3e732972470", 00:12:27.204 "is_configured": true, 00:12:27.204 "data_offset": 0, 00:12:27.204 "data_size": 65536 00:12:27.204 }, 00:12:27.204 { 00:12:27.204 "name": "BaseBdev2", 00:12:27.204 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:27.204 "is_configured": true, 00:12:27.204 "data_offset": 0, 00:12:27.204 "data_size": 65536 00:12:27.204 } 00:12:27.204 ] 00:12:27.204 }' 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.205 [2024-11-20 03:19:16.723605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.205 [2024-11-20 03:19:16.736418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:27.205 [2024-11-20 03:19:16.743711] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:27.205 [2024-11-20 03:19:16.751885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:27.205 [2024-11-20 03:19:16.751923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.205 [2024-11-20 03:19:16.751939] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:27.205 [2024-11-20 03:19:16.789600] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.205 03:19:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.464 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.464 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.464 "name": "raid_bdev1", 00:12:27.464 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:27.464 "strip_size_kb": 0, 00:12:27.464 "state": "online", 00:12:27.464 "raid_level": "raid1", 00:12:27.464 "superblock": false, 00:12:27.464 "num_base_bdevs": 2, 00:12:27.464 "num_base_bdevs_discovered": 1, 00:12:27.464 "num_base_bdevs_operational": 1, 00:12:27.464 "base_bdevs_list": [ 00:12:27.464 { 00:12:27.464 "name": null, 00:12:27.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.464 "is_configured": false, 00:12:27.464 "data_offset": 0, 00:12:27.464 "data_size": 65536 00:12:27.464 }, 00:12:27.464 { 00:12:27.464 "name": "BaseBdev2", 00:12:27.464 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:27.464 "is_configured": true, 00:12:27.464 "data_offset": 0, 00:12:27.464 "data_size": 65536 00:12:27.464 } 00:12:27.464 ] 00:12:27.464 }' 00:12:27.464 03:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.464 03:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.723 156.00 IOPS, 468.00 MiB/s [2024-11-20T03:19:17.358Z] 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.723 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.723 "name": "raid_bdev1", 00:12:27.723 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:27.723 "strip_size_kb": 0, 00:12:27.723 "state": "online", 00:12:27.724 "raid_level": "raid1", 00:12:27.724 "superblock": false, 00:12:27.724 "num_base_bdevs": 2, 00:12:27.724 "num_base_bdevs_discovered": 1, 00:12:27.724 "num_base_bdevs_operational": 1, 00:12:27.724 "base_bdevs_list": [ 00:12:27.724 { 00:12:27.724 "name": null, 00:12:27.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.724 "is_configured": false, 00:12:27.724 "data_offset": 0, 00:12:27.724 "data_size": 65536 00:12:27.724 }, 00:12:27.724 { 00:12:27.724 "name": "BaseBdev2", 00:12:27.724 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:27.724 "is_configured": true, 00:12:27.724 "data_offset": 0, 00:12:27.724 "data_size": 65536 00:12:27.724 } 00:12:27.724 ] 00:12:27.724 }' 00:12:27.724 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.724 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:27.983 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.983 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.983 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:12:27.983 03:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.983 03:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.983 [2024-11-20 03:19:17.423097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.983 03:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.983 03:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:27.983 [2024-11-20 03:19:17.474881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:27.983 [2024-11-20 03:19:17.476862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:28.242 [2024-11-20 03:19:17.617320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:28.242 [2024-11-20 03:19:17.752772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:28.242 [2024-11-20 03:19:17.753234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:28.809 161.00 IOPS, 483.00 MiB/s [2024-11-20T03:19:18.444Z] [2024-11-20 03:19:18.216115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:28.809 [2024-11-20 03:19:18.216514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.068 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.068 "name": "raid_bdev1", 00:12:29.068 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:29.068 "strip_size_kb": 0, 00:12:29.068 "state": "online", 00:12:29.068 "raid_level": "raid1", 00:12:29.068 "superblock": false, 00:12:29.068 "num_base_bdevs": 2, 00:12:29.068 "num_base_bdevs_discovered": 2, 00:12:29.068 "num_base_bdevs_operational": 2, 00:12:29.068 "process": { 00:12:29.068 "type": "rebuild", 00:12:29.068 "target": "spare", 00:12:29.069 "progress": { 00:12:29.069 "blocks": 12288, 00:12:29.069 "percent": 18 00:12:29.069 } 00:12:29.069 }, 00:12:29.069 "base_bdevs_list": [ 00:12:29.069 { 00:12:29.069 "name": "spare", 00:12:29.069 "uuid": "b6f0a245-533d-5ed1-890c-f3e732972470", 00:12:29.069 "is_configured": true, 00:12:29.069 "data_offset": 0, 00:12:29.069 "data_size": 65536 00:12:29.069 }, 00:12:29.069 { 00:12:29.069 "name": "BaseBdev2", 00:12:29.069 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:29.069 "is_configured": true, 00:12:29.069 "data_offset": 0, 00:12:29.069 "data_size": 65536 00:12:29.069 } 00:12:29.069 ] 00:12:29.069 }' 00:12:29.069 
03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.069 [2024-11-20 03:19:18.547255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=402 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.069 "name": "raid_bdev1", 00:12:29.069 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:29.069 "strip_size_kb": 0, 00:12:29.069 "state": "online", 00:12:29.069 "raid_level": "raid1", 00:12:29.069 "superblock": false, 00:12:29.069 "num_base_bdevs": 2, 00:12:29.069 "num_base_bdevs_discovered": 2, 00:12:29.069 "num_base_bdevs_operational": 2, 00:12:29.069 "process": { 00:12:29.069 "type": "rebuild", 00:12:29.069 "target": "spare", 00:12:29.069 "progress": { 00:12:29.069 "blocks": 14336, 00:12:29.069 "percent": 21 00:12:29.069 } 00:12:29.069 }, 00:12:29.069 "base_bdevs_list": [ 00:12:29.069 { 00:12:29.069 "name": "spare", 00:12:29.069 "uuid": "b6f0a245-533d-5ed1-890c-f3e732972470", 00:12:29.069 "is_configured": true, 00:12:29.069 "data_offset": 0, 00:12:29.069 "data_size": 65536 00:12:29.069 }, 00:12:29.069 { 00:12:29.069 "name": "BaseBdev2", 00:12:29.069 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:29.069 "is_configured": true, 00:12:29.069 "data_offset": 0, 00:12:29.069 "data_size": 65536 00:12:29.069 } 00:12:29.069 ] 00:12:29.069 }' 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.069 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.328 03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.328 
03:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.328 [2024-11-20 03:19:18.898591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:29.328 [2024-11-20 03:19:18.905529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:29.588 [2024-11-20 03:19:19.026020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:30.155 137.00 IOPS, 411.00 MiB/s [2024-11-20T03:19:19.790Z] [2024-11-20 03:19:19.643642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:30.155 [2024-11-20 03:19:19.757424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.155 "name": "raid_bdev1", 00:12:30.155 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:30.155 "strip_size_kb": 0, 00:12:30.155 "state": "online", 00:12:30.155 "raid_level": "raid1", 00:12:30.155 "superblock": false, 00:12:30.155 "num_base_bdevs": 2, 00:12:30.155 "num_base_bdevs_discovered": 2, 00:12:30.155 "num_base_bdevs_operational": 2, 00:12:30.155 "process": { 00:12:30.155 "type": "rebuild", 00:12:30.155 "target": "spare", 00:12:30.155 "progress": { 00:12:30.155 "blocks": 32768, 00:12:30.155 "percent": 50 00:12:30.155 } 00:12:30.155 }, 00:12:30.155 "base_bdevs_list": [ 00:12:30.155 { 00:12:30.155 "name": "spare", 00:12:30.155 "uuid": "b6f0a245-533d-5ed1-890c-f3e732972470", 00:12:30.155 "is_configured": true, 00:12:30.155 "data_offset": 0, 00:12:30.155 "data_size": 65536 00:12:30.155 }, 00:12:30.155 { 00:12:30.155 "name": "BaseBdev2", 00:12:30.155 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:30.155 "is_configured": true, 00:12:30.155 "data_offset": 0, 00:12:30.155 "data_size": 65536 00:12:30.155 } 00:12:30.155 ] 00:12:30.155 }' 00:12:30.155 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.415 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.415 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.415 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.415 03:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:30.674 119.40 IOPS, 358.20 MiB/s [2024-11-20T03:19:20.309Z] [2024-11-20 03:19:20.205948] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:30.674 [2024-11-20 03:19:20.213098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:30.934 [2024-11-20 03:19:20.545868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.504 "name": "raid_bdev1", 00:12:31.504 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:31.504 "strip_size_kb": 0, 00:12:31.504 "state": "online", 00:12:31.504 "raid_level": "raid1", 00:12:31.504 "superblock": false, 
00:12:31.504 "num_base_bdevs": 2, 00:12:31.504 "num_base_bdevs_discovered": 2, 00:12:31.504 "num_base_bdevs_operational": 2, 00:12:31.504 "process": { 00:12:31.504 "type": "rebuild", 00:12:31.504 "target": "spare", 00:12:31.504 "progress": { 00:12:31.504 "blocks": 53248, 00:12:31.504 "percent": 81 00:12:31.504 } 00:12:31.504 }, 00:12:31.504 "base_bdevs_list": [ 00:12:31.504 { 00:12:31.504 "name": "spare", 00:12:31.504 "uuid": "b6f0a245-533d-5ed1-890c-f3e732972470", 00:12:31.504 "is_configured": true, 00:12:31.504 "data_offset": 0, 00:12:31.504 "data_size": 65536 00:12:31.504 }, 00:12:31.504 { 00:12:31.504 "name": "BaseBdev2", 00:12:31.504 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:31.504 "is_configured": true, 00:12:31.504 "data_offset": 0, 00:12:31.504 "data_size": 65536 00:12:31.504 } 00:12:31.504 ] 00:12:31.504 }' 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.504 03:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.504 03:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.504 03:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:31.764 107.50 IOPS, 322.50 MiB/s [2024-11-20T03:19:21.399Z] [2024-11-20 03:19:21.200522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:32.024 [2024-11-20 03:19:21.530739] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:32.024 [2024-11-20 03:19:21.630565] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:32.024 [2024-11-20 03:19:21.633029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.594 
03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.594 "name": "raid_bdev1", 00:12:32.594 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:32.594 "strip_size_kb": 0, 00:12:32.594 "state": "online", 00:12:32.594 "raid_level": "raid1", 00:12:32.594 "superblock": false, 00:12:32.594 "num_base_bdevs": 2, 00:12:32.594 "num_base_bdevs_discovered": 2, 00:12:32.594 "num_base_bdevs_operational": 2, 00:12:32.594 "base_bdevs_list": [ 00:12:32.594 { 00:12:32.594 "name": "spare", 00:12:32.594 "uuid": "b6f0a245-533d-5ed1-890c-f3e732972470", 00:12:32.594 "is_configured": true, 00:12:32.594 "data_offset": 0, 00:12:32.594 "data_size": 65536 00:12:32.594 }, 00:12:32.594 { 00:12:32.594 "name": "BaseBdev2", 00:12:32.594 "uuid": 
"a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:32.594 "is_configured": true, 00:12:32.594 "data_offset": 0, 00:12:32.594 "data_size": 65536 00:12:32.594 } 00:12:32.594 ] 00:12:32.594 }' 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.594 98.14 IOPS, 294.43 MiB/s [2024-11-20T03:19:22.229Z] 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.594 
"name": "raid_bdev1", 00:12:32.594 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:32.594 "strip_size_kb": 0, 00:12:32.594 "state": "online", 00:12:32.594 "raid_level": "raid1", 00:12:32.594 "superblock": false, 00:12:32.594 "num_base_bdevs": 2, 00:12:32.594 "num_base_bdevs_discovered": 2, 00:12:32.594 "num_base_bdevs_operational": 2, 00:12:32.594 "base_bdevs_list": [ 00:12:32.594 { 00:12:32.594 "name": "spare", 00:12:32.594 "uuid": "b6f0a245-533d-5ed1-890c-f3e732972470", 00:12:32.594 "is_configured": true, 00:12:32.594 "data_offset": 0, 00:12:32.594 "data_size": 65536 00:12:32.594 }, 00:12:32.594 { 00:12:32.594 "name": "BaseBdev2", 00:12:32.594 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:32.594 "is_configured": true, 00:12:32.594 "data_offset": 0, 00:12:32.594 "data_size": 65536 00:12:32.594 } 00:12:32.594 ] 00:12:32.594 }' 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.594 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.854 
03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.854 "name": "raid_bdev1", 00:12:32.854 "uuid": "9d7a9e5e-f451-4048-a1af-a358be9c5289", 00:12:32.854 "strip_size_kb": 0, 00:12:32.854 "state": "online", 00:12:32.854 "raid_level": "raid1", 00:12:32.854 "superblock": false, 00:12:32.854 "num_base_bdevs": 2, 00:12:32.854 "num_base_bdevs_discovered": 2, 00:12:32.854 "num_base_bdevs_operational": 2, 00:12:32.854 "base_bdevs_list": [ 00:12:32.854 { 00:12:32.854 "name": "spare", 00:12:32.854 "uuid": "b6f0a245-533d-5ed1-890c-f3e732972470", 00:12:32.854 "is_configured": true, 00:12:32.854 "data_offset": 0, 00:12:32.854 "data_size": 65536 00:12:32.854 }, 00:12:32.854 { 00:12:32.854 "name": "BaseBdev2", 00:12:32.854 "uuid": "a49dd240-bdce-5fb2-a26b-9cf4f985d743", 00:12:32.854 "is_configured": true, 00:12:32.854 "data_offset": 0, 00:12:32.854 "data_size": 65536 00:12:32.854 } 00:12:32.854 ] 00:12:32.854 }' 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:12:32.854 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.137 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:33.137 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.137 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.137 [2024-11-20 03:19:22.664119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.137 [2024-11-20 03:19:22.664211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.137 00:12:33.137 Latency(us) 00:12:33.137 [2024-11-20T03:19:22.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.137 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:33.137 raid_bdev1 : 7.53 92.93 278.80 0.00 0.00 15234.63 334.48 109436.53 00:12:33.137 [2024-11-20T03:19:22.772Z] =================================================================================================================== 00:12:33.137 [2024-11-20T03:19:22.772Z] Total : 92.93 278.80 0.00 0.00 15234.63 334.48 109436.53 00:12:33.137 { 00:12:33.137 "results": [ 00:12:33.137 { 00:12:33.137 "job": "raid_bdev1", 00:12:33.137 "core_mask": "0x1", 00:12:33.137 "workload": "randrw", 00:12:33.137 "percentage": 50, 00:12:33.137 "status": "finished", 00:12:33.137 "queue_depth": 2, 00:12:33.137 "io_size": 3145728, 00:12:33.137 "runtime": 7.53235, 00:12:33.137 "iops": 92.93248454997445, 00:12:33.137 "mibps": 278.79745364992334, 00:12:33.137 "io_failed": 0, 00:12:33.137 "io_timeout": 0, 00:12:33.137 "avg_latency_us": 15234.631226450407, 00:12:33.137 "min_latency_us": 334.4768558951965, 00:12:33.137 "max_latency_us": 109436.5344978166 00:12:33.137 } 00:12:33.137 ], 00:12:33.137 "core_count": 1 00:12:33.137 } 00:12:33.137 [2024-11-20 
03:19:22.735740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.137 [2024-11-20 03:19:22.735794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.137 [2024-11-20 03:19:22.735878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.137 [2024-11-20 03:19:22.735891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:33.137 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.137 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.137 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.137 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:33.137 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:33.410 
03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.410 03:19:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:33.410 /dev/nbd0 00:12:33.410 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.410 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:33.410 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:33.410 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:33.410 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.410 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.410 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.670 1+0 records in 00:12:33.670 1+0 records out 00:12:33.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259538 s, 15.8 MB/s 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:33.670 /dev/nbd1 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.670 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.670 1+0 records in 00:12:33.670 1+0 records out 00:12:33.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531976 s, 7.7 MB/s 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.930 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.190 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76292 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76292 ']' 00:12:34.450 03:19:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76292 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76292 00:12:34.450 killing process with pid 76292 00:12:34.450 Received shutdown signal, test time was about 8.820427 seconds 00:12:34.450 00:12:34.450 Latency(us) 00:12:34.450 [2024-11-20T03:19:24.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.450 [2024-11-20T03:19:24.085Z] =================================================================================================================== 00:12:34.450 [2024-11-20T03:19:24.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76292' 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76292 00:12:34.450 [2024-11-20 03:19:23.997544] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.450 03:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76292 00:12:34.711 [2024-11-20 03:19:24.230445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:36.092 00:12:36.092 real 0m11.981s 00:12:36.092 user 0m15.094s 00:12:36.092 sys 0m1.428s 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:12:36.092 ************************************ 00:12:36.092 END TEST raid_rebuild_test_io 00:12:36.092 ************************************ 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.092 03:19:25 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:36.092 03:19:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:36.092 03:19:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.092 03:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.092 ************************************ 00:12:36.092 START TEST raid_rebuild_test_sb_io 00:12:36.092 ************************************ 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.092 03:19:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76666 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76666 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76666 ']' 
00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.092 03:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.092 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:36.092 Zero copy mechanism will not be used. 00:12:36.092 [2024-11-20 03:19:25.570383] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:36.092 [2024-11-20 03:19:25.570513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76666 ] 00:12:36.351 [2024-11-20 03:19:25.747023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.351 [2024-11-20 03:19:25.865983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.610 [2024-11-20 03:19:26.062918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.610 [2024-11-20 03:19:26.062980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:36.870 03:19:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.870 BaseBdev1_malloc 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.870 [2024-11-20 03:19:26.451675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:36.870 [2024-11-20 03:19:26.451761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.870 [2024-11-20 03:19:26.451781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:36.870 [2024-11-20 03:19:26.451793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.870 [2024-11-20 03:19:26.453894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.870 [2024-11-20 03:19:26.453935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.870 BaseBdev1 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.870 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.130 BaseBdev2_malloc 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.130 [2024-11-20 03:19:26.508540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:37.130 [2024-11-20 03:19:26.508604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.130 [2024-11-20 03:19:26.508630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:37.130 [2024-11-20 03:19:26.508643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.130 [2024-11-20 03:19:26.510680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.130 [2024-11-20 03:19:26.510720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:37.130 BaseBdev2 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:37.130 spare_malloc 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.130 spare_delay 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.130 [2024-11-20 03:19:26.588610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:37.130 [2024-11-20 03:19:26.588681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.130 [2024-11-20 03:19:26.588703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:37.130 [2024-11-20 03:19:26.588714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.130 [2024-11-20 03:19:26.591087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.130 [2024-11-20 03:19:26.591203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:37.130 spare 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.130 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.130 [2024-11-20 03:19:26.600669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.130 [2024-11-20 03:19:26.602507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.130 [2024-11-20 03:19:26.602700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:37.130 [2024-11-20 03:19:26.602720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:37.130 [2024-11-20 03:19:26.602995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:37.130 [2024-11-20 03:19:26.603192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:37.130 [2024-11-20 03:19:26.603202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:37.130 [2024-11-20 03:19:26.603384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.131 03:19:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.131 "name": "raid_bdev1", 00:12:37.131 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:37.131 "strip_size_kb": 0, 00:12:37.131 "state": "online", 00:12:37.131 "raid_level": "raid1", 00:12:37.131 "superblock": true, 00:12:37.131 "num_base_bdevs": 2, 00:12:37.131 "num_base_bdevs_discovered": 2, 00:12:37.131 "num_base_bdevs_operational": 2, 00:12:37.131 "base_bdevs_list": [ 00:12:37.131 { 00:12:37.131 "name": "BaseBdev1", 00:12:37.131 "uuid": "96e1fcd7-37c9-53bd-a0ed-5576873ef2ca", 00:12:37.131 "is_configured": true, 00:12:37.131 "data_offset": 2048, 00:12:37.131 "data_size": 63488 00:12:37.131 }, 00:12:37.131 { 00:12:37.131 "name": "BaseBdev2", 00:12:37.131 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:37.131 "is_configured": true, 00:12:37.131 "data_offset": 2048, 
00:12:37.131 "data_size": 63488 00:12:37.131 } 00:12:37.131 ] 00:12:37.131 }' 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.131 03:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.699 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.700 [2024-11-20 03:19:27.060167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:37.700 03:19:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.700 [2024-11-20 03:19:27.155711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.700 03:19:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.700 "name": "raid_bdev1", 00:12:37.700 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:37.700 "strip_size_kb": 0, 00:12:37.700 "state": "online", 00:12:37.700 "raid_level": "raid1", 00:12:37.700 "superblock": true, 00:12:37.700 "num_base_bdevs": 2, 00:12:37.700 "num_base_bdevs_discovered": 1, 00:12:37.700 "num_base_bdevs_operational": 1, 00:12:37.700 "base_bdevs_list": [ 00:12:37.700 { 00:12:37.700 "name": null, 00:12:37.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.700 "is_configured": false, 00:12:37.700 "data_offset": 0, 00:12:37.700 "data_size": 63488 00:12:37.700 }, 00:12:37.700 { 00:12:37.700 "name": "BaseBdev2", 00:12:37.700 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:37.700 "is_configured": true, 00:12:37.700 "data_offset": 2048, 00:12:37.700 "data_size": 63488 00:12:37.700 } 00:12:37.700 ] 00:12:37.700 }' 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.700 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.700 [2024-11-20 03:19:27.247396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:37.700 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:37.700 Zero copy mechanism will not be used. 00:12:37.700 Running I/O for 60 seconds... 
00:12:37.979 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:37.979 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.979 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.980 [2024-11-20 03:19:27.594473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.246 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.246 03:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:38.246 [2024-11-20 03:19:27.662200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:38.246 [2024-11-20 03:19:27.664400] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:38.246 [2024-11-20 03:19:27.791704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:38.246 [2024-11-20 03:19:27.792393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:38.505 [2024-11-20 03:19:28.012727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:38.505 [2024-11-20 03:19:28.013155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:38.763 176.00 IOPS, 528.00 MiB/s [2024-11-20T03:19:28.399Z] [2024-11-20 03:19:28.350551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:39.023 [2024-11-20 03:19:28.477744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:39.023 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.023 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.023 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.023 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.023 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.023 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.023 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.023 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.023 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.282 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.282 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.282 "name": "raid_bdev1", 00:12:39.282 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:39.282 "strip_size_kb": 0, 00:12:39.282 "state": "online", 00:12:39.282 "raid_level": "raid1", 00:12:39.282 "superblock": true, 00:12:39.282 "num_base_bdevs": 2, 00:12:39.282 "num_base_bdevs_discovered": 2, 00:12:39.282 "num_base_bdevs_operational": 2, 00:12:39.282 "process": { 00:12:39.282 "type": "rebuild", 00:12:39.282 "target": "spare", 00:12:39.283 "progress": { 00:12:39.283 "blocks": 10240, 00:12:39.283 "percent": 16 00:12:39.283 } 00:12:39.283 }, 00:12:39.283 "base_bdevs_list": [ 00:12:39.283 { 00:12:39.283 "name": "spare", 00:12:39.283 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:39.283 "is_configured": true, 00:12:39.283 "data_offset": 2048, 00:12:39.283 "data_size": 63488 
00:12:39.283 }, 00:12:39.283 { 00:12:39.283 "name": "BaseBdev2", 00:12:39.283 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:39.283 "is_configured": true, 00:12:39.283 "data_offset": 2048, 00:12:39.283 "data_size": 63488 00:12:39.283 } 00:12:39.283 ] 00:12:39.283 }' 00:12:39.283 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.283 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.283 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.283 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.283 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:39.283 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.283 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.283 [2024-11-20 03:19:28.785531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:39.283 [2024-11-20 03:19:28.907598] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:39.283 [2024-11-20 03:19:28.910600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.283 [2024-11-20 03:19:28.910690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:39.283 [2024-11-20 03:19:28.910705] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:39.542 [2024-11-20 03:19:28.953016] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.543 03:19:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.543 03:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.543 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.543 "name": "raid_bdev1", 00:12:39.543 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:39.543 "strip_size_kb": 0, 00:12:39.543 "state": "online", 00:12:39.543 "raid_level": "raid1", 00:12:39.543 
"superblock": true, 00:12:39.543 "num_base_bdevs": 2, 00:12:39.543 "num_base_bdevs_discovered": 1, 00:12:39.543 "num_base_bdevs_operational": 1, 00:12:39.543 "base_bdevs_list": [ 00:12:39.543 { 00:12:39.543 "name": null, 00:12:39.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.543 "is_configured": false, 00:12:39.543 "data_offset": 0, 00:12:39.543 "data_size": 63488 00:12:39.543 }, 00:12:39.543 { 00:12:39.543 "name": "BaseBdev2", 00:12:39.543 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:39.543 "is_configured": true, 00:12:39.543 "data_offset": 2048, 00:12:39.543 "data_size": 63488 00:12:39.543 } 00:12:39.543 ] 00:12:39.543 }' 00:12:39.543 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.543 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.807 155.50 IOPS, 466.50 MiB/s [2024-11-20T03:19:29.442Z] 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.807 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.807 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.807 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.807 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.807 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.807 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.808 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.808 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.808 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.808 "name": "raid_bdev1", 00:12:39.808 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:39.808 "strip_size_kb": 0, 00:12:39.808 "state": "online", 00:12:39.808 "raid_level": "raid1", 00:12:39.808 "superblock": true, 00:12:39.808 "num_base_bdevs": 2, 00:12:39.808 "num_base_bdevs_discovered": 1, 00:12:39.808 "num_base_bdevs_operational": 1, 00:12:39.808 "base_bdevs_list": [ 00:12:39.808 { 00:12:39.808 "name": null, 00:12:39.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.808 "is_configured": false, 00:12:39.808 "data_offset": 0, 00:12:39.808 "data_size": 63488 00:12:39.808 }, 00:12:39.808 { 00:12:39.808 "name": "BaseBdev2", 00:12:39.808 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:39.808 "is_configured": true, 00:12:39.808 "data_offset": 2048, 00:12:39.808 "data_size": 63488 00:12:39.808 } 00:12:39.808 ] 00:12:39.808 }' 00:12:40.070 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.070 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.070 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.070 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.070 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:40.070 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.070 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.070 [2024-11-20 03:19:29.544200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.070 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.070 03:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:40.070 [2024-11-20 03:19:29.597778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:40.070 [2024-11-20 03:19:29.599672] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:40.329 [2024-11-20 03:19:29.707885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:40.329 [2024-11-20 03:19:29.708493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:40.329 [2024-11-20 03:19:29.910934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:40.329 [2024-11-20 03:19:29.911359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:40.899 [2024-11-20 03:19:30.244372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:40.899 [2024-11-20 03:19:30.244744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:40.899 163.67 IOPS, 491.00 MiB/s [2024-11-20T03:19:30.534Z] [2024-11-20 03:19:30.489646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.159 03:19:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.159 "name": "raid_bdev1", 00:12:41.159 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:41.159 "strip_size_kb": 0, 00:12:41.159 "state": "online", 00:12:41.159 "raid_level": "raid1", 00:12:41.159 "superblock": true, 00:12:41.159 "num_base_bdevs": 2, 00:12:41.159 "num_base_bdevs_discovered": 2, 00:12:41.159 "num_base_bdevs_operational": 2, 00:12:41.159 "process": { 00:12:41.159 "type": "rebuild", 00:12:41.159 "target": "spare", 00:12:41.159 "progress": { 00:12:41.159 "blocks": 14336, 00:12:41.159 "percent": 22 00:12:41.159 } 00:12:41.159 }, 00:12:41.159 "base_bdevs_list": [ 00:12:41.159 { 00:12:41.159 "name": "spare", 00:12:41.159 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:41.159 "is_configured": true, 00:12:41.159 "data_offset": 2048, 00:12:41.159 "data_size": 63488 00:12:41.159 }, 00:12:41.159 { 00:12:41.159 "name": "BaseBdev2", 00:12:41.159 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:41.159 "is_configured": true, 00:12:41.159 "data_offset": 2048, 00:12:41.159 "data_size": 63488 00:12:41.159 } 00:12:41.159 ] 00:12:41.159 }' 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.159 [2024-11-20 03:19:30.699950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:41.159 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.159 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.159 "name": "raid_bdev1", 00:12:41.159 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:41.159 "strip_size_kb": 0, 00:12:41.159 "state": "online", 00:12:41.159 "raid_level": "raid1", 00:12:41.159 "superblock": true, 00:12:41.159 "num_base_bdevs": 2, 00:12:41.159 "num_base_bdevs_discovered": 2, 00:12:41.159 "num_base_bdevs_operational": 2, 00:12:41.159 "process": { 00:12:41.159 "type": "rebuild", 00:12:41.159 "target": "spare", 00:12:41.159 "progress": { 00:12:41.159 "blocks": 16384, 00:12:41.159 "percent": 25 00:12:41.159 } 00:12:41.159 }, 00:12:41.159 "base_bdevs_list": [ 00:12:41.159 { 00:12:41.159 "name": "spare", 00:12:41.159 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:41.159 "is_configured": true, 00:12:41.159 "data_offset": 2048, 00:12:41.159 "data_size": 63488 00:12:41.159 }, 00:12:41.159 { 00:12:41.159 "name": "BaseBdev2", 00:12:41.159 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:41.159 "is_configured": true, 00:12:41.159 "data_offset": 2048, 00:12:41.159 "data_size": 63488 00:12:41.159 } 00:12:41.159 ] 00:12:41.159 }' 00:12:41.419 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.419 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:12:41.419 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.419 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.419 03:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.419 [2024-11-20 03:19:31.025586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:41.419 [2024-11-20 03:19:31.026186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:41.678 [2024-11-20 03:19:31.243546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:41.678 [2024-11-20 03:19:31.244002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:42.247 139.50 IOPS, 418.50 MiB/s [2024-11-20T03:19:31.882Z] [2024-11-20 03:19:31.574846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:42.247 [2024-11-20 03:19:31.575442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:42.247 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.247 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.247 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.247 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.247 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.247 03:19:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.507 "name": "raid_bdev1", 00:12:42.507 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:42.507 "strip_size_kb": 0, 00:12:42.507 "state": "online", 00:12:42.507 "raid_level": "raid1", 00:12:42.507 "superblock": true, 00:12:42.507 "num_base_bdevs": 2, 00:12:42.507 "num_base_bdevs_discovered": 2, 00:12:42.507 "num_base_bdevs_operational": 2, 00:12:42.507 "process": { 00:12:42.507 "type": "rebuild", 00:12:42.507 "target": "spare", 00:12:42.507 "progress": { 00:12:42.507 "blocks": 28672, 00:12:42.507 "percent": 45 00:12:42.507 } 00:12:42.507 }, 00:12:42.507 "base_bdevs_list": [ 00:12:42.507 { 00:12:42.507 "name": "spare", 00:12:42.507 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:42.507 "is_configured": true, 00:12:42.507 "data_offset": 2048, 00:12:42.507 "data_size": 63488 00:12:42.507 }, 00:12:42.507 { 00:12:42.507 "name": "BaseBdev2", 00:12:42.507 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:42.507 "is_configured": true, 00:12:42.507 "data_offset": 2048, 00:12:42.507 "data_size": 63488 00:12:42.507 } 00:12:42.507 ] 00:12:42.507 }' 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.507 03:19:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.507 03:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:42.507 [2024-11-20 03:19:32.067445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:42.766 121.00 IOPS, 363.00 MiB/s [2024-11-20T03:19:32.401Z] [2024-11-20 03:19:32.281751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:43.026 [2024-11-20 03:19:32.591662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:43.595 [2024-11-20 03:19:33.002237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.596 "name": "raid_bdev1", 00:12:43.596 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:43.596 "strip_size_kb": 0, 00:12:43.596 "state": "online", 00:12:43.596 "raid_level": "raid1", 00:12:43.596 "superblock": true, 00:12:43.596 "num_base_bdevs": 2, 00:12:43.596 "num_base_bdevs_discovered": 2, 00:12:43.596 "num_base_bdevs_operational": 2, 00:12:43.596 "process": { 00:12:43.596 "type": "rebuild", 00:12:43.596 "target": "spare", 00:12:43.596 "progress": { 00:12:43.596 "blocks": 45056, 00:12:43.596 "percent": 70 00:12:43.596 } 00:12:43.596 }, 00:12:43.596 "base_bdevs_list": [ 00:12:43.596 { 00:12:43.596 "name": "spare", 00:12:43.596 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:43.596 "is_configured": true, 00:12:43.596 "data_offset": 2048, 00:12:43.596 "data_size": 63488 00:12:43.596 }, 00:12:43.596 { 00:12:43.596 "name": "BaseBdev2", 00:12:43.596 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:43.596 "is_configured": true, 00:12:43.596 "data_offset": 2048, 00:12:43.596 "data_size": 63488 00:12:43.596 } 00:12:43.596 ] 00:12:43.596 }' 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.596 [2024-11-20 03:19:33.109751] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:43.596 [2024-11-20 03:19:33.110173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.596 03:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:44.437 110.00 IOPS, 330.00 MiB/s [2024-11-20T03:19:34.072Z] [2024-11-20 03:19:33.783961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:44.437 [2024-11-20 03:19:33.993238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.712 "name": "raid_bdev1", 00:12:44.712 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:44.712 "strip_size_kb": 0, 00:12:44.712 "state": "online", 00:12:44.712 "raid_level": "raid1", 00:12:44.712 "superblock": true, 00:12:44.712 "num_base_bdevs": 2, 00:12:44.712 "num_base_bdevs_discovered": 2, 00:12:44.712 "num_base_bdevs_operational": 2, 00:12:44.712 "process": { 00:12:44.712 "type": "rebuild", 00:12:44.712 "target": "spare", 00:12:44.712 "progress": { 00:12:44.712 "blocks": 61440, 00:12:44.712 "percent": 96 00:12:44.712 } 00:12:44.712 }, 00:12:44.712 "base_bdevs_list": [ 00:12:44.712 { 00:12:44.712 "name": "spare", 00:12:44.712 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:44.712 "is_configured": true, 00:12:44.712 "data_offset": 2048, 00:12:44.712 "data_size": 63488 00:12:44.712 }, 00:12:44.712 { 00:12:44.712 "name": "BaseBdev2", 00:12:44.712 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:44.712 "is_configured": true, 00:12:44.712 "data_offset": 2048, 00:12:44.712 "data_size": 63488 00:12:44.712 } 00:12:44.712 ] 00:12:44.712 }' 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.712 [2024-11-20 03:19:34.217562] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.712 99.71 IOPS, 299.14 MiB/s [2024-11-20T03:19:34.347Z] 03:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.712 03:19:34 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:44.712 [2024-11-20 03:19:34.317438] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:44.712 [2024-11-20 03:19:34.319796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.652 92.00 IOPS, 276.00 MiB/s [2024-11-20T03:19:35.287Z] 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.652 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.652 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.652 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.652 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.652 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.652 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.652 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.652 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.652 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.911 "name": "raid_bdev1", 00:12:45.911 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:45.911 "strip_size_kb": 0, 00:12:45.911 "state": "online", 00:12:45.911 "raid_level": "raid1", 00:12:45.911 "superblock": true, 00:12:45.911 "num_base_bdevs": 2, 00:12:45.911 
"num_base_bdevs_discovered": 2, 00:12:45.911 "num_base_bdevs_operational": 2, 00:12:45.911 "base_bdevs_list": [ 00:12:45.911 { 00:12:45.911 "name": "spare", 00:12:45.911 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:45.911 "is_configured": true, 00:12:45.911 "data_offset": 2048, 00:12:45.911 "data_size": 63488 00:12:45.911 }, 00:12:45.911 { 00:12:45.911 "name": "BaseBdev2", 00:12:45.911 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:45.911 "is_configured": true, 00:12:45.911 "data_offset": 2048, 00:12:45.911 "data_size": 63488 00:12:45.911 } 00:12:45.911 ] 00:12:45.911 }' 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.911 03:19:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.911 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.911 "name": "raid_bdev1", 00:12:45.911 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:45.911 "strip_size_kb": 0, 00:12:45.911 "state": "online", 00:12:45.911 "raid_level": "raid1", 00:12:45.911 "superblock": true, 00:12:45.911 "num_base_bdevs": 2, 00:12:45.911 "num_base_bdevs_discovered": 2, 00:12:45.911 "num_base_bdevs_operational": 2, 00:12:45.911 "base_bdevs_list": [ 00:12:45.911 { 00:12:45.911 "name": "spare", 00:12:45.911 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:45.911 "is_configured": true, 00:12:45.911 "data_offset": 2048, 00:12:45.911 "data_size": 63488 00:12:45.911 }, 00:12:45.911 { 00:12:45.911 "name": "BaseBdev2", 00:12:45.911 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:45.911 "is_configured": true, 00:12:45.911 "data_offset": 2048, 00:12:45.911 "data_size": 63488 00:12:45.911 } 00:12:45.911 ] 00:12:45.911 }' 00:12:45.912 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.912 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.912 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.171 
03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.171 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.171 "name": "raid_bdev1", 00:12:46.171 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:46.171 "strip_size_kb": 0, 00:12:46.171 "state": "online", 00:12:46.171 "raid_level": "raid1", 00:12:46.171 "superblock": true, 00:12:46.171 "num_base_bdevs": 2, 00:12:46.171 "num_base_bdevs_discovered": 2, 00:12:46.171 "num_base_bdevs_operational": 2, 00:12:46.171 "base_bdevs_list": [ 00:12:46.171 { 00:12:46.171 "name": "spare", 00:12:46.171 "uuid": 
"fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:46.171 "is_configured": true, 00:12:46.171 "data_offset": 2048, 00:12:46.171 "data_size": 63488 00:12:46.171 }, 00:12:46.171 { 00:12:46.171 "name": "BaseBdev2", 00:12:46.171 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:46.171 "is_configured": true, 00:12:46.172 "data_offset": 2048, 00:12:46.172 "data_size": 63488 00:12:46.172 } 00:12:46.172 ] 00:12:46.172 }' 00:12:46.172 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.172 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.433 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.433 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.433 03:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.433 [2024-11-20 03:19:35.990367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.433 [2024-11-20 03:19:35.990481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.433 00:12:46.433 Latency(us) 00:12:46.433 [2024-11-20T03:19:36.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.433 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:46.433 raid_bdev1 : 8.82 87.89 263.68 0.00 0.00 15115.22 313.01 109894.43 00:12:46.433 [2024-11-20T03:19:36.068Z] =================================================================================================================== 00:12:46.433 [2024-11-20T03:19:36.068Z] Total : 87.89 263.68 0.00 0.00 15115.22 313.01 109894.43 00:12:46.694 [2024-11-20 03:19:36.075745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.694 [2024-11-20 03:19:36.075852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:12:46.694 [2024-11-20 03:19:36.075958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.694 [2024-11-20 03:19:36.076004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:46.694 { 00:12:46.694 "results": [ 00:12:46.694 { 00:12:46.694 "job": "raid_bdev1", 00:12:46.694 "core_mask": "0x1", 00:12:46.694 "workload": "randrw", 00:12:46.694 "percentage": 50, 00:12:46.694 "status": "finished", 00:12:46.694 "queue_depth": 2, 00:12:46.694 "io_size": 3145728, 00:12:46.694 "runtime": 8.81765, 00:12:46.694 "iops": 87.89189863512387, 00:12:46.694 "mibps": 263.6756959053716, 00:12:46.694 "io_failed": 0, 00:12:46.694 "io_timeout": 0, 00:12:46.694 "avg_latency_us": 15115.220870545149, 00:12:46.694 "min_latency_us": 313.0131004366812, 00:12:46.694 "max_latency_us": 109894.42794759825 00:12:46.694 } 00:12:46.694 ], 00:12:46.694 "core_count": 1 00:12:46.694 } 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:46.694 03:19:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.694 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:46.955 /dev/nbd0 00:12:46.955 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.955 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.955 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:46.955 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:46.955 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.955 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.955 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:46.955 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@877 -- # break 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.956 1+0 records in 00:12:46.956 1+0 records out 00:12:46.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366514 s, 11.2 MB/s 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 
00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.956 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:47.216 /dev/nbd1 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.217 1+0 records in 00:12:47.217 1+0 records out 00:12:47.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510727 s, 8.0 MB/s 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.217 03:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.477 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:47.738 
03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.738 [2024-11-20 03:19:37.345990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:47.738 [2024-11-20 03:19:37.346124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.738 [2024-11-20 03:19:37.346181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:47.738 [2024-11-20 03:19:37.346233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.738 [2024-11-20 03:19:37.348867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.738 
[2024-11-20 03:19:37.348954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:47.738 [2024-11-20 03:19:37.349197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:47.738 [2024-11-20 03:19:37.349337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.738 [2024-11-20 03:19:37.349592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.738 spare 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.738 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.998 [2024-11-20 03:19:37.449634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:47.998 [2024-11-20 03:19:37.449752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.998 [2024-11-20 03:19:37.450142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:47.998 [2024-11-20 03:19:37.450415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:47.998 [2024-11-20 03:19:37.450465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:47.998 [2024-11-20 03:19:37.450765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:47.998 03:19:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.998 "name": "raid_bdev1", 00:12:47.998 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:47.998 "strip_size_kb": 0, 00:12:47.998 "state": "online", 00:12:47.998 "raid_level": "raid1", 00:12:47.998 "superblock": true, 00:12:47.998 "num_base_bdevs": 2, 00:12:47.998 "num_base_bdevs_discovered": 2, 00:12:47.998 "num_base_bdevs_operational": 2, 
00:12:47.998 "base_bdevs_list": [ 00:12:47.998 { 00:12:47.998 "name": "spare", 00:12:47.998 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:47.998 "is_configured": true, 00:12:47.998 "data_offset": 2048, 00:12:47.998 "data_size": 63488 00:12:47.998 }, 00:12:47.998 { 00:12:47.998 "name": "BaseBdev2", 00:12:47.998 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:47.998 "is_configured": true, 00:12:47.998 "data_offset": 2048, 00:12:47.998 "data_size": 63488 00:12:47.998 } 00:12:47.998 ] 00:12:47.998 }' 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.998 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.568 "name": "raid_bdev1", 
00:12:48.568 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:48.568 "strip_size_kb": 0, 00:12:48.568 "state": "online", 00:12:48.568 "raid_level": "raid1", 00:12:48.568 "superblock": true, 00:12:48.568 "num_base_bdevs": 2, 00:12:48.568 "num_base_bdevs_discovered": 2, 00:12:48.568 "num_base_bdevs_operational": 2, 00:12:48.568 "base_bdevs_list": [ 00:12:48.568 { 00:12:48.568 "name": "spare", 00:12:48.568 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:48.568 "is_configured": true, 00:12:48.568 "data_offset": 2048, 00:12:48.568 "data_size": 63488 00:12:48.568 }, 00:12:48.568 { 00:12:48.568 "name": "BaseBdev2", 00:12:48.568 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:48.568 "is_configured": true, 00:12:48.568 "data_offset": 2048, 00:12:48.568 "data_size": 63488 00:12:48.568 } 00:12:48.568 ] 00:12:48.568 }' 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.568 03:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.568 03:19:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.568 [2024-11-20 03:19:38.097749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.568 
03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.568 "name": "raid_bdev1", 00:12:48.568 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:48.568 "strip_size_kb": 0, 00:12:48.568 "state": "online", 00:12:48.568 "raid_level": "raid1", 00:12:48.568 "superblock": true, 00:12:48.568 "num_base_bdevs": 2, 00:12:48.568 "num_base_bdevs_discovered": 1, 00:12:48.568 "num_base_bdevs_operational": 1, 00:12:48.568 "base_bdevs_list": [ 00:12:48.568 { 00:12:48.568 "name": null, 00:12:48.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.568 "is_configured": false, 00:12:48.568 "data_offset": 0, 00:12:48.568 "data_size": 63488 00:12:48.568 }, 00:12:48.568 { 00:12:48.568 "name": "BaseBdev2", 00:12:48.568 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:48.568 "is_configured": true, 00:12:48.568 "data_offset": 2048, 00:12:48.568 "data_size": 63488 00:12:48.568 } 00:12:48.568 ] 00:12:48.568 }' 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.568 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.136 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:49.136 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.136 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.136 [2024-11-20 03:19:38.600959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.136 [2024-11-20 03:19:38.601168] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) 
smaller than existing raid bdev raid_bdev1 (5) 00:12:49.136 [2024-11-20 03:19:38.601188] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:49.136 [2024-11-20 03:19:38.601230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.136 [2024-11-20 03:19:38.618324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:12:49.136 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.136 03:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:49.136 [2024-11-20 03:19:38.620373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:50.084 "name": "raid_bdev1", 00:12:50.084 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:50.084 "strip_size_kb": 0, 00:12:50.084 "state": "online", 00:12:50.084 "raid_level": "raid1", 00:12:50.084 "superblock": true, 00:12:50.084 "num_base_bdevs": 2, 00:12:50.084 "num_base_bdevs_discovered": 2, 00:12:50.084 "num_base_bdevs_operational": 2, 00:12:50.084 "process": { 00:12:50.084 "type": "rebuild", 00:12:50.084 "target": "spare", 00:12:50.084 "progress": { 00:12:50.084 "blocks": 20480, 00:12:50.084 "percent": 32 00:12:50.084 } 00:12:50.084 }, 00:12:50.084 "base_bdevs_list": [ 00:12:50.084 { 00:12:50.084 "name": "spare", 00:12:50.084 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:50.084 "is_configured": true, 00:12:50.084 "data_offset": 2048, 00:12:50.084 "data_size": 63488 00:12:50.084 }, 00:12:50.084 { 00:12:50.084 "name": "BaseBdev2", 00:12:50.084 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:50.084 "is_configured": true, 00:12:50.084 "data_offset": 2048, 00:12:50.084 "data_size": 63488 00:12:50.084 } 00:12:50.084 ] 00:12:50.084 }' 00:12:50.084 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.344 [2024-11-20 03:19:39.767863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:50.344 [2024-11-20 03:19:39.826048] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:50.344 [2024-11-20 03:19:39.826192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.344 [2024-11-20 03:19:39.826210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.344 [2024-11-20 03:19:39.826220] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.344 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.344 "name": "raid_bdev1", 00:12:50.344 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:50.344 "strip_size_kb": 0, 00:12:50.344 "state": "online", 00:12:50.344 "raid_level": "raid1", 00:12:50.344 "superblock": true, 00:12:50.344 "num_base_bdevs": 2, 00:12:50.344 "num_base_bdevs_discovered": 1, 00:12:50.344 "num_base_bdevs_operational": 1, 00:12:50.344 "base_bdevs_list": [ 00:12:50.344 { 00:12:50.344 "name": null, 00:12:50.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.344 "is_configured": false, 00:12:50.344 "data_offset": 0, 00:12:50.344 "data_size": 63488 00:12:50.344 }, 00:12:50.344 { 00:12:50.344 "name": "BaseBdev2", 00:12:50.345 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:50.345 "is_configured": true, 00:12:50.345 "data_offset": 2048, 00:12:50.345 "data_size": 63488 00:12:50.345 } 00:12:50.345 ] 00:12:50.345 }' 00:12:50.345 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.345 03:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.912 03:19:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:50.912 03:19:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.912 03:19:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.912 [2024-11-20 03:19:40.336542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:50.912 [2024-11-20 
03:19:40.336704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.912 [2024-11-20 03:19:40.336752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:50.912 [2024-11-20 03:19:40.336783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.912 [2024-11-20 03:19:40.337340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.912 [2024-11-20 03:19:40.337407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:50.912 [2024-11-20 03:19:40.337536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:50.912 [2024-11-20 03:19:40.337583] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:50.912 [2024-11-20 03:19:40.337640] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:50.912 [2024-11-20 03:19:40.337697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:50.912 [2024-11-20 03:19:40.354745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:12:50.912 spare 00:12:50.912 03:19:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.912 03:19:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:50.912 [2024-11-20 03:19:40.356888] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.859 "name": "raid_bdev1", 00:12:51.859 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:51.859 "strip_size_kb": 0, 00:12:51.859 
"state": "online", 00:12:51.859 "raid_level": "raid1", 00:12:51.859 "superblock": true, 00:12:51.859 "num_base_bdevs": 2, 00:12:51.859 "num_base_bdevs_discovered": 2, 00:12:51.859 "num_base_bdevs_operational": 2, 00:12:51.859 "process": { 00:12:51.859 "type": "rebuild", 00:12:51.859 "target": "spare", 00:12:51.859 "progress": { 00:12:51.859 "blocks": 20480, 00:12:51.859 "percent": 32 00:12:51.859 } 00:12:51.859 }, 00:12:51.859 "base_bdevs_list": [ 00:12:51.859 { 00:12:51.859 "name": "spare", 00:12:51.859 "uuid": "fb13dc44-aff5-5823-aa08-4e95658f4bb0", 00:12:51.859 "is_configured": true, 00:12:51.859 "data_offset": 2048, 00:12:51.859 "data_size": 63488 00:12:51.859 }, 00:12:51.859 { 00:12:51.859 "name": "BaseBdev2", 00:12:51.859 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:51.859 "is_configured": true, 00:12:51.859 "data_offset": 2048, 00:12:51.859 "data_size": 63488 00:12:51.859 } 00:12:51.859 ] 00:12:51.859 }' 00:12:51.859 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.860 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.860 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.121 [2024-11-20 03:19:41.516444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.121 [2024-11-20 03:19:41.562606] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:52.121 [2024-11-20 03:19:41.562735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.121 [2024-11-20 03:19:41.562755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.121 [2024-11-20 03:19:41.562763] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.121 03:19:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.121 "name": "raid_bdev1", 00:12:52.121 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:52.121 "strip_size_kb": 0, 00:12:52.121 "state": "online", 00:12:52.121 "raid_level": "raid1", 00:12:52.121 "superblock": true, 00:12:52.121 "num_base_bdevs": 2, 00:12:52.121 "num_base_bdevs_discovered": 1, 00:12:52.121 "num_base_bdevs_operational": 1, 00:12:52.121 "base_bdevs_list": [ 00:12:52.121 { 00:12:52.121 "name": null, 00:12:52.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.121 "is_configured": false, 00:12:52.121 "data_offset": 0, 00:12:52.121 "data_size": 63488 00:12:52.121 }, 00:12:52.121 { 00:12:52.121 "name": "BaseBdev2", 00:12:52.121 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:52.121 "is_configured": true, 00:12:52.121 "data_offset": 2048, 00:12:52.121 "data_size": 63488 00:12:52.121 } 00:12:52.121 ] 00:12:52.121 }' 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.121 03:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.691 "name": "raid_bdev1", 00:12:52.691 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:52.691 "strip_size_kb": 0, 00:12:52.691 "state": "online", 00:12:52.691 "raid_level": "raid1", 00:12:52.691 "superblock": true, 00:12:52.691 "num_base_bdevs": 2, 00:12:52.691 "num_base_bdevs_discovered": 1, 00:12:52.691 "num_base_bdevs_operational": 1, 00:12:52.691 "base_bdevs_list": [ 00:12:52.691 { 00:12:52.691 "name": null, 00:12:52.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.691 "is_configured": false, 00:12:52.691 "data_offset": 0, 00:12:52.691 "data_size": 63488 00:12:52.691 }, 00:12:52.691 { 00:12:52.691 "name": "BaseBdev2", 00:12:52.691 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:52.691 "is_configured": true, 00:12:52.691 "data_offset": 2048, 00:12:52.691 "data_size": 63488 00:12:52.691 } 00:12:52.691 ] 00:12:52.691 }' 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.691 [2024-11-20 03:19:42.172763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:52.691 [2024-11-20 03:19:42.172822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.691 [2024-11-20 03:19:42.172844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:52.691 [2024-11-20 03:19:42.172853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.691 [2024-11-20 03:19:42.173285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.691 [2024-11-20 03:19:42.173301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:52.691 [2024-11-20 03:19:42.173381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:52.691 [2024-11-20 03:19:42.173395] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:52.691 [2024-11-20 03:19:42.173406] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:52.691 [2024-11-20 03:19:42.173417] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:52.691 BaseBdev1 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.691 03:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.630 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.630 "name": "raid_bdev1", 00:12:53.630 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:53.631 "strip_size_kb": 0, 00:12:53.631 "state": "online", 00:12:53.631 "raid_level": "raid1", 00:12:53.631 "superblock": true, 00:12:53.631 "num_base_bdevs": 2, 00:12:53.631 "num_base_bdevs_discovered": 1, 00:12:53.631 "num_base_bdevs_operational": 1, 00:12:53.631 "base_bdevs_list": [ 00:12:53.631 { 00:12:53.631 "name": null, 00:12:53.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.631 "is_configured": false, 00:12:53.631 "data_offset": 0, 00:12:53.631 "data_size": 63488 00:12:53.631 }, 00:12:53.631 { 00:12:53.631 "name": "BaseBdev2", 00:12:53.631 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:53.631 "is_configured": true, 00:12:53.631 "data_offset": 2048, 00:12:53.631 "data_size": 63488 00:12:53.631 } 00:12:53.631 ] 00:12:53.631 }' 00:12:53.631 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.631 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.201 "name": "raid_bdev1", 00:12:54.201 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:54.201 "strip_size_kb": 0, 00:12:54.201 "state": "online", 00:12:54.201 "raid_level": "raid1", 00:12:54.201 "superblock": true, 00:12:54.201 "num_base_bdevs": 2, 00:12:54.201 "num_base_bdevs_discovered": 1, 00:12:54.201 "num_base_bdevs_operational": 1, 00:12:54.201 "base_bdevs_list": [ 00:12:54.201 { 00:12:54.201 "name": null, 00:12:54.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.201 "is_configured": false, 00:12:54.201 "data_offset": 0, 00:12:54.201 "data_size": 63488 00:12:54.201 }, 00:12:54.201 { 00:12:54.201 "name": "BaseBdev2", 00:12:54.201 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:54.201 "is_configured": true, 00:12:54.201 "data_offset": 2048, 00:12:54.201 "data_size": 63488 00:12:54.201 } 00:12:54.201 ] 00:12:54.201 }' 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.201 [2024-11-20 03:19:43.738323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.201 [2024-11-20 03:19:43.738560] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:54.201 [2024-11-20 03:19:43.738586] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:54.201 request: 00:12:54.201 { 00:12:54.201 "base_bdev": "BaseBdev1", 00:12:54.201 "raid_bdev": "raid_bdev1", 00:12:54.201 "method": "bdev_raid_add_base_bdev", 00:12:54.201 "req_id": 1 00:12:54.201 } 00:12:54.201 Got JSON-RPC error response 00:12:54.201 response: 00:12:54.201 { 00:12:54.201 "code": -22, 00:12:54.201 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:54.201 } 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:54.201 03:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.141 03:19:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.141 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.401 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.401 "name": "raid_bdev1", 00:12:55.401 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:55.401 "strip_size_kb": 0, 00:12:55.401 "state": "online", 00:12:55.401 "raid_level": "raid1", 00:12:55.401 "superblock": true, 00:12:55.401 "num_base_bdevs": 2, 00:12:55.401 "num_base_bdevs_discovered": 1, 00:12:55.401 "num_base_bdevs_operational": 1, 00:12:55.401 "base_bdevs_list": [ 00:12:55.401 { 00:12:55.401 "name": null, 00:12:55.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.401 "is_configured": false, 00:12:55.401 "data_offset": 0, 00:12:55.401 "data_size": 63488 00:12:55.401 }, 00:12:55.401 { 00:12:55.401 "name": "BaseBdev2", 00:12:55.401 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:55.401 "is_configured": true, 00:12:55.401 "data_offset": 2048, 00:12:55.401 "data_size": 63488 00:12:55.401 } 00:12:55.401 ] 00:12:55.401 }' 00:12:55.401 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.401 03:19:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.661 03:19:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.661 "name": "raid_bdev1", 00:12:55.661 "uuid": "4174be56-27c1-40b6-8e24-620eaef05b9d", 00:12:55.661 "strip_size_kb": 0, 00:12:55.661 "state": "online", 00:12:55.661 "raid_level": "raid1", 00:12:55.661 "superblock": true, 00:12:55.661 "num_base_bdevs": 2, 00:12:55.661 "num_base_bdevs_discovered": 1, 00:12:55.661 "num_base_bdevs_operational": 1, 00:12:55.661 "base_bdevs_list": [ 00:12:55.661 { 00:12:55.661 "name": null, 00:12:55.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.661 "is_configured": false, 00:12:55.661 "data_offset": 0, 00:12:55.661 "data_size": 63488 00:12:55.661 }, 00:12:55.661 { 00:12:55.661 "name": "BaseBdev2", 00:12:55.661 "uuid": "d0670655-4ec1-55d1-9613-408b51fc6e44", 00:12:55.661 "is_configured": true, 00:12:55.661 "data_offset": 2048, 00:12:55.661 "data_size": 63488 00:12:55.661 } 00:12:55.661 ] 00:12:55.661 }' 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:55.661 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:55.921 03:19:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76666 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76666 ']' 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76666 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76666 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76666' 00:12:55.921 killing process with pid 76666 00:12:55.921 Received shutdown signal, test time was about 18.162034 seconds 00:12:55.921 00:12:55.921 Latency(us) 00:12:55.921 [2024-11-20T03:19:45.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.921 [2024-11-20T03:19:45.556Z] =================================================================================================================== 00:12:55.921 [2024-11-20T03:19:45.556Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76666 00:12:55.921 [2024-11-20 03:19:45.376605] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.921 [2024-11-20 03:19:45.376774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.921 [2024-11-20 03:19:45.376831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:12:55.921 [2024-11-20 03:19:45.376848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:55.921 03:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76666 00:12:56.180 [2024-11-20 03:19:45.611659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:57.561 00:12:57.561 real 0m21.303s 00:12:57.561 user 0m27.613s 00:12:57.561 sys 0m2.247s 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.561 ************************************ 00:12:57.561 END TEST raid_rebuild_test_sb_io 00:12:57.561 ************************************ 00:12:57.561 03:19:46 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:57.561 03:19:46 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:57.561 03:19:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:57.561 03:19:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.561 03:19:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.561 ************************************ 00:12:57.561 START TEST raid_rebuild_test 00:12:57.561 ************************************ 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:57.561 03:19:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77374 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77374 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:57.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77374 ']' 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.561 03:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.561 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:57.561 Zero copy mechanism will not be used. 
00:12:57.561 [2024-11-20 03:19:46.946772] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:57.561 [2024-11-20 03:19:46.946892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77374 ] 00:12:57.561 [2024-11-20 03:19:47.121657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.821 [2024-11-20 03:19:47.240443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.821 [2024-11-20 03:19:47.446484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.821 [2024-11-20 03:19:47.446523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.392 BaseBdev1_malloc 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.392 
[2024-11-20 03:19:47.846062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:58.392 [2024-11-20 03:19:47.846208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.392 [2024-11-20 03:19:47.846278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:58.392 [2024-11-20 03:19:47.846326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.392 [2024-11-20 03:19:47.848891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.392 [2024-11-20 03:19:47.848986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:58.392 BaseBdev1 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.392 BaseBdev2_malloc 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.392 [2024-11-20 03:19:47.902677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:58.392 [2024-11-20 03:19:47.902746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:58.392 [2024-11-20 03:19:47.902766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:58.392 [2024-11-20 03:19:47.902779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.392 [2024-11-20 03:19:47.904895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.392 [2024-11-20 03:19:47.904937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:58.392 BaseBdev2 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.392 BaseBdev3_malloc 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.392 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.392 [2024-11-20 03:19:47.963647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:58.392 [2024-11-20 03:19:47.963714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.393 [2024-11-20 03:19:47.963739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:58.393 [2024-11-20 03:19:47.963750] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.393 [2024-11-20 03:19:47.965953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.393 [2024-11-20 03:19:47.965998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:58.393 BaseBdev3 00:12:58.393 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.393 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.393 03:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:58.393 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.393 03:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.393 BaseBdev4_malloc 00:12:58.393 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.393 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:58.393 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.393 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.393 [2024-11-20 03:19:48.019666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:58.393 [2024-11-20 03:19:48.019736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.393 [2024-11-20 03:19:48.019758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:58.393 [2024-11-20 03:19:48.019771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.393 [2024-11-20 03:19:48.022169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.393 [2024-11-20 03:19:48.022211] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:58.652 BaseBdev4 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.652 spare_malloc 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.652 spare_delay 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.652 [2024-11-20 03:19:48.086545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:58.652 [2024-11-20 03:19:48.086624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.652 [2024-11-20 03:19:48.086645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:58.652 [2024-11-20 03:19:48.086656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.652 [2024-11-20 
03:19:48.088736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.652 [2024-11-20 03:19:48.088774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:58.652 spare 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.652 [2024-11-20 03:19:48.098577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.652 [2024-11-20 03:19:48.100445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.652 [2024-11-20 03:19:48.100574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.652 [2024-11-20 03:19:48.100671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:58.652 [2024-11-20 03:19:48.100767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:58.652 [2024-11-20 03:19:48.100782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:58.652 [2024-11-20 03:19:48.101087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:58.652 [2024-11-20 03:19:48.101290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:58.652 [2024-11-20 03:19:48.101303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:58.652 [2024-11-20 03:19:48.101477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.652 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.653 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.653 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.653 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.653 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.653 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.653 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.653 "name": "raid_bdev1", 00:12:58.653 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:12:58.653 "strip_size_kb": 0, 00:12:58.653 "state": "online", 00:12:58.653 "raid_level": 
"raid1", 00:12:58.653 "superblock": false, 00:12:58.653 "num_base_bdevs": 4, 00:12:58.653 "num_base_bdevs_discovered": 4, 00:12:58.653 "num_base_bdevs_operational": 4, 00:12:58.653 "base_bdevs_list": [ 00:12:58.653 { 00:12:58.653 "name": "BaseBdev1", 00:12:58.653 "uuid": "09803bc0-669b-5198-b45a-962c6ba16152", 00:12:58.653 "is_configured": true, 00:12:58.653 "data_offset": 0, 00:12:58.653 "data_size": 65536 00:12:58.653 }, 00:12:58.653 { 00:12:58.653 "name": "BaseBdev2", 00:12:58.653 "uuid": "befb4a39-a645-5f94-bd30-bbaaac20cb9f", 00:12:58.653 "is_configured": true, 00:12:58.653 "data_offset": 0, 00:12:58.653 "data_size": 65536 00:12:58.653 }, 00:12:58.653 { 00:12:58.653 "name": "BaseBdev3", 00:12:58.653 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:12:58.653 "is_configured": true, 00:12:58.653 "data_offset": 0, 00:12:58.653 "data_size": 65536 00:12:58.653 }, 00:12:58.653 { 00:12:58.653 "name": "BaseBdev4", 00:12:58.653 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:12:58.653 "is_configured": true, 00:12:58.653 "data_offset": 0, 00:12:58.653 "data_size": 65536 00:12:58.653 } 00:12:58.653 ] 00:12:58.653 }' 00:12:58.653 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.653 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 [2024-11-20 03:19:48.590161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.221 03:19:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:59.221 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.222 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.222 03:19:48 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:59.222 [2024-11-20 03:19:48.845444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:59.481 /dev/nbd0 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.481 1+0 records in 00:12:59.481 1+0 records out 00:12:59.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511482 s, 8.0 MB/s 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:59.481 03:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:04.760 65536+0 records in 00:13:04.760 65536+0 records out 00:13:04.760 33554432 bytes (34 MB, 32 MiB) copied, 5.41991 s, 6.2 MB/s 00:13:04.760 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:04.760 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.760 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:04.761 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.761 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:04.761 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.761 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:05.020 [2024-11-20 03:19:54.546671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:05.020 
03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.020 [2024-11-20 03:19:54.562781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.020 03:19:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.020 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.020 "name": "raid_bdev1", 00:13:05.020 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:05.020 "strip_size_kb": 0, 00:13:05.020 "state": "online", 00:13:05.020 "raid_level": "raid1", 00:13:05.020 "superblock": false, 00:13:05.020 "num_base_bdevs": 4, 00:13:05.020 "num_base_bdevs_discovered": 3, 00:13:05.020 "num_base_bdevs_operational": 3, 00:13:05.020 "base_bdevs_list": [ 00:13:05.020 { 00:13:05.020 "name": null, 00:13:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.020 "is_configured": false, 00:13:05.020 "data_offset": 0, 00:13:05.020 "data_size": 65536 00:13:05.020 }, 00:13:05.020 { 00:13:05.021 "name": "BaseBdev2", 00:13:05.021 "uuid": "befb4a39-a645-5f94-bd30-bbaaac20cb9f", 00:13:05.021 "is_configured": true, 00:13:05.021 "data_offset": 0, 00:13:05.021 "data_size": 65536 00:13:05.021 }, 00:13:05.021 { 00:13:05.021 "name": "BaseBdev3", 00:13:05.021 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:05.021 "is_configured": true, 00:13:05.021 "data_offset": 0, 00:13:05.021 "data_size": 65536 00:13:05.021 }, 00:13:05.021 { 00:13:05.021 "name": "BaseBdev4", 00:13:05.021 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:05.021 
"is_configured": true, 00:13:05.021 "data_offset": 0, 00:13:05.021 "data_size": 65536 00:13:05.021 } 00:13:05.021 ] 00:13:05.021 }' 00:13:05.021 03:19:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.021 03:19:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.589 03:19:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.590 03:19:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.590 03:19:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.590 [2024-11-20 03:19:55.033985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.590 [2024-11-20 03:19:55.047132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:05.590 03:19:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.590 03:19:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:05.590 [2024-11-20 03:19:55.049009] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.527 "name": "raid_bdev1", 00:13:06.527 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:06.527 "strip_size_kb": 0, 00:13:06.527 "state": "online", 00:13:06.527 "raid_level": "raid1", 00:13:06.527 "superblock": false, 00:13:06.527 "num_base_bdevs": 4, 00:13:06.527 "num_base_bdevs_discovered": 4, 00:13:06.527 "num_base_bdevs_operational": 4, 00:13:06.527 "process": { 00:13:06.527 "type": "rebuild", 00:13:06.527 "target": "spare", 00:13:06.527 "progress": { 00:13:06.527 "blocks": 20480, 00:13:06.527 "percent": 31 00:13:06.527 } 00:13:06.527 }, 00:13:06.527 "base_bdevs_list": [ 00:13:06.527 { 00:13:06.527 "name": "spare", 00:13:06.527 "uuid": "26b3eae4-2842-5727-8dcf-1ea1cd6db8f5", 00:13:06.527 "is_configured": true, 00:13:06.527 "data_offset": 0, 00:13:06.527 "data_size": 65536 00:13:06.527 }, 00:13:06.527 { 00:13:06.527 "name": "BaseBdev2", 00:13:06.527 "uuid": "befb4a39-a645-5f94-bd30-bbaaac20cb9f", 00:13:06.527 "is_configured": true, 00:13:06.527 "data_offset": 0, 00:13:06.527 "data_size": 65536 00:13:06.527 }, 00:13:06.527 { 00:13:06.527 "name": "BaseBdev3", 00:13:06.527 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:06.527 "is_configured": true, 00:13:06.527 "data_offset": 0, 00:13:06.527 "data_size": 65536 00:13:06.527 }, 00:13:06.527 { 00:13:06.527 "name": "BaseBdev4", 00:13:06.527 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:06.527 "is_configured": true, 00:13:06.527 "data_offset": 0, 00:13:06.527 "data_size": 65536 00:13:06.527 } 00:13:06.527 ] 00:13:06.527 }' 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.527 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.787 [2024-11-20 03:19:56.208960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.787 [2024-11-20 03:19:56.254706] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:06.787 [2024-11-20 03:19:56.254911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.787 [2024-11-20 03:19:56.254934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.787 [2024-11-20 03:19:56.254946] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.787 "name": "raid_bdev1", 00:13:06.787 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:06.787 "strip_size_kb": 0, 00:13:06.787 "state": "online", 00:13:06.787 "raid_level": "raid1", 00:13:06.787 "superblock": false, 00:13:06.787 "num_base_bdevs": 4, 00:13:06.787 "num_base_bdevs_discovered": 3, 00:13:06.787 "num_base_bdevs_operational": 3, 00:13:06.787 "base_bdevs_list": [ 00:13:06.787 { 00:13:06.787 "name": null, 00:13:06.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.787 "is_configured": false, 00:13:06.787 "data_offset": 0, 00:13:06.787 "data_size": 65536 00:13:06.787 }, 00:13:06.787 { 00:13:06.787 "name": "BaseBdev2", 00:13:06.787 "uuid": "befb4a39-a645-5f94-bd30-bbaaac20cb9f", 00:13:06.787 "is_configured": true, 00:13:06.787 "data_offset": 0, 00:13:06.787 "data_size": 65536 00:13:06.787 }, 00:13:06.787 { 
00:13:06.787 "name": "BaseBdev3", 00:13:06.787 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:06.787 "is_configured": true, 00:13:06.787 "data_offset": 0, 00:13:06.787 "data_size": 65536 00:13:06.787 }, 00:13:06.787 { 00:13:06.787 "name": "BaseBdev4", 00:13:06.787 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:06.787 "is_configured": true, 00:13:06.787 "data_offset": 0, 00:13:06.787 "data_size": 65536 00:13:06.787 } 00:13:06.787 ] 00:13:06.787 }' 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.787 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.356 "name": "raid_bdev1", 00:13:07.356 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:07.356 "strip_size_kb": 0, 00:13:07.356 "state": "online", 
00:13:07.356 "raid_level": "raid1", 00:13:07.356 "superblock": false, 00:13:07.356 "num_base_bdevs": 4, 00:13:07.356 "num_base_bdevs_discovered": 3, 00:13:07.356 "num_base_bdevs_operational": 3, 00:13:07.356 "base_bdevs_list": [ 00:13:07.356 { 00:13:07.356 "name": null, 00:13:07.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.356 "is_configured": false, 00:13:07.356 "data_offset": 0, 00:13:07.356 "data_size": 65536 00:13:07.356 }, 00:13:07.356 { 00:13:07.356 "name": "BaseBdev2", 00:13:07.356 "uuid": "befb4a39-a645-5f94-bd30-bbaaac20cb9f", 00:13:07.356 "is_configured": true, 00:13:07.356 "data_offset": 0, 00:13:07.356 "data_size": 65536 00:13:07.356 }, 00:13:07.356 { 00:13:07.356 "name": "BaseBdev3", 00:13:07.356 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:07.356 "is_configured": true, 00:13:07.356 "data_offset": 0, 00:13:07.356 "data_size": 65536 00:13:07.356 }, 00:13:07.356 { 00:13:07.356 "name": "BaseBdev4", 00:13:07.356 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:07.356 "is_configured": true, 00:13:07.356 "data_offset": 0, 00:13:07.356 "data_size": 65536 00:13:07.356 } 00:13:07.356 ] 00:13:07.356 }' 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.356 [2024-11-20 03:19:56.863759] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.356 [2024-11-20 03:19:56.879635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.356 03:19:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:07.356 [2024-11-20 03:19:56.881602] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.295 03:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.555 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.555 "name": "raid_bdev1", 00:13:08.555 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:08.555 "strip_size_kb": 0, 00:13:08.555 "state": "online", 00:13:08.555 "raid_level": "raid1", 00:13:08.555 "superblock": false, 00:13:08.555 "num_base_bdevs": 4, 00:13:08.555 
"num_base_bdevs_discovered": 4, 00:13:08.555 "num_base_bdevs_operational": 4, 00:13:08.555 "process": { 00:13:08.555 "type": "rebuild", 00:13:08.555 "target": "spare", 00:13:08.555 "progress": { 00:13:08.555 "blocks": 20480, 00:13:08.555 "percent": 31 00:13:08.555 } 00:13:08.555 }, 00:13:08.555 "base_bdevs_list": [ 00:13:08.555 { 00:13:08.555 "name": "spare", 00:13:08.555 "uuid": "26b3eae4-2842-5727-8dcf-1ea1cd6db8f5", 00:13:08.555 "is_configured": true, 00:13:08.555 "data_offset": 0, 00:13:08.555 "data_size": 65536 00:13:08.555 }, 00:13:08.555 { 00:13:08.555 "name": "BaseBdev2", 00:13:08.555 "uuid": "befb4a39-a645-5f94-bd30-bbaaac20cb9f", 00:13:08.555 "is_configured": true, 00:13:08.555 "data_offset": 0, 00:13:08.555 "data_size": 65536 00:13:08.555 }, 00:13:08.555 { 00:13:08.555 "name": "BaseBdev3", 00:13:08.555 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:08.555 "is_configured": true, 00:13:08.555 "data_offset": 0, 00:13:08.555 "data_size": 65536 00:13:08.555 }, 00:13:08.555 { 00:13:08.555 "name": "BaseBdev4", 00:13:08.555 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:08.555 "is_configured": true, 00:13:08.555 "data_offset": 0, 00:13:08.555 "data_size": 65536 00:13:08.555 } 00:13:08.555 ] 00:13:08.555 }' 00:13:08.555 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.555 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.555 03:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.555 [2024-11-20 03:19:58.040935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:08.555 [2024-11-20 03:19:58.087223] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.555 03:19:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.555 "name": "raid_bdev1", 00:13:08.555 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:08.555 "strip_size_kb": 0, 00:13:08.555 "state": "online", 00:13:08.555 "raid_level": "raid1", 00:13:08.555 "superblock": false, 00:13:08.555 "num_base_bdevs": 4, 00:13:08.555 "num_base_bdevs_discovered": 3, 00:13:08.555 "num_base_bdevs_operational": 3, 00:13:08.555 "process": { 00:13:08.555 "type": "rebuild", 00:13:08.555 "target": "spare", 00:13:08.555 "progress": { 00:13:08.555 "blocks": 24576, 00:13:08.555 "percent": 37 00:13:08.555 } 00:13:08.555 }, 00:13:08.555 "base_bdevs_list": [ 00:13:08.555 { 00:13:08.555 "name": "spare", 00:13:08.555 "uuid": "26b3eae4-2842-5727-8dcf-1ea1cd6db8f5", 00:13:08.555 "is_configured": true, 00:13:08.555 "data_offset": 0, 00:13:08.555 "data_size": 65536 00:13:08.555 }, 00:13:08.555 { 00:13:08.555 "name": null, 00:13:08.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.555 "is_configured": false, 00:13:08.555 "data_offset": 0, 00:13:08.555 "data_size": 65536 00:13:08.555 }, 00:13:08.555 { 00:13:08.555 "name": "BaseBdev3", 00:13:08.555 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:08.555 "is_configured": true, 00:13:08.555 "data_offset": 0, 00:13:08.555 "data_size": 65536 00:13:08.555 }, 00:13:08.555 { 00:13:08.555 "name": "BaseBdev4", 00:13:08.555 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:08.555 "is_configured": true, 00:13:08.555 "data_offset": 0, 00:13:08.555 "data_size": 65536 00:13:08.555 } 00:13:08.555 ] 00:13:08.555 }' 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.555 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:08.815 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=442 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.816 "name": "raid_bdev1", 00:13:08.816 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:08.816 "strip_size_kb": 0, 00:13:08.816 "state": "online", 00:13:08.816 "raid_level": "raid1", 00:13:08.816 "superblock": false, 00:13:08.816 "num_base_bdevs": 4, 00:13:08.816 "num_base_bdevs_discovered": 3, 00:13:08.816 "num_base_bdevs_operational": 3, 00:13:08.816 "process": { 00:13:08.816 "type": "rebuild", 00:13:08.816 "target": "spare", 00:13:08.816 "progress": { 
00:13:08.816 "blocks": 26624, 00:13:08.816 "percent": 40 00:13:08.816 } 00:13:08.816 }, 00:13:08.816 "base_bdevs_list": [ 00:13:08.816 { 00:13:08.816 "name": "spare", 00:13:08.816 "uuid": "26b3eae4-2842-5727-8dcf-1ea1cd6db8f5", 00:13:08.816 "is_configured": true, 00:13:08.816 "data_offset": 0, 00:13:08.816 "data_size": 65536 00:13:08.816 }, 00:13:08.816 { 00:13:08.816 "name": null, 00:13:08.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.816 "is_configured": false, 00:13:08.816 "data_offset": 0, 00:13:08.816 "data_size": 65536 00:13:08.816 }, 00:13:08.816 { 00:13:08.816 "name": "BaseBdev3", 00:13:08.816 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:08.816 "is_configured": true, 00:13:08.816 "data_offset": 0, 00:13:08.816 "data_size": 65536 00:13:08.816 }, 00:13:08.816 { 00:13:08.816 "name": "BaseBdev4", 00:13:08.816 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:08.816 "is_configured": true, 00:13:08.816 "data_offset": 0, 00:13:08.816 "data_size": 65536 00:13:08.816 } 00:13:08.816 ] 00:13:08.816 }' 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.816 03:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.761 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.761 "name": "raid_bdev1", 00:13:09.761 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:09.761 "strip_size_kb": 0, 00:13:09.761 "state": "online", 00:13:09.761 "raid_level": "raid1", 00:13:09.761 "superblock": false, 00:13:09.761 "num_base_bdevs": 4, 00:13:09.761 "num_base_bdevs_discovered": 3, 00:13:09.761 "num_base_bdevs_operational": 3, 00:13:09.761 "process": { 00:13:09.761 "type": "rebuild", 00:13:09.761 "target": "spare", 00:13:09.761 "progress": { 00:13:09.761 "blocks": 49152, 00:13:09.761 "percent": 75 00:13:09.761 } 00:13:09.761 }, 00:13:09.761 "base_bdevs_list": [ 00:13:09.761 { 00:13:09.761 "name": "spare", 00:13:09.761 "uuid": "26b3eae4-2842-5727-8dcf-1ea1cd6db8f5", 00:13:09.761 "is_configured": true, 00:13:09.761 "data_offset": 0, 00:13:09.761 "data_size": 65536 00:13:09.761 }, 00:13:09.761 { 00:13:09.762 "name": null, 00:13:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.762 "is_configured": false, 00:13:09.762 "data_offset": 0, 00:13:09.762 "data_size": 65536 00:13:09.762 }, 00:13:09.762 { 00:13:09.762 "name": "BaseBdev3", 00:13:09.762 "uuid": 
"3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:09.762 "is_configured": true, 00:13:09.762 "data_offset": 0, 00:13:09.762 "data_size": 65536 00:13:09.762 }, 00:13:09.762 { 00:13:09.762 "name": "BaseBdev4", 00:13:09.762 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:09.762 "is_configured": true, 00:13:09.762 "data_offset": 0, 00:13:09.762 "data_size": 65536 00:13:09.762 } 00:13:09.762 ] 00:13:09.762 }' 00:13:09.762 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.023 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.023 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.023 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.023 03:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:10.593 [2024-11-20 03:20:00.096553] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:10.593 [2024-11-20 03:20:00.096678] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:10.593 [2024-11-20 03:20:00.096728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.852 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.852 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.852 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.852 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.852 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.852 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.112 03:20:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.112 "name": "raid_bdev1", 00:13:11.112 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:11.112 "strip_size_kb": 0, 00:13:11.112 "state": "online", 00:13:11.112 "raid_level": "raid1", 00:13:11.112 "superblock": false, 00:13:11.112 "num_base_bdevs": 4, 00:13:11.112 "num_base_bdevs_discovered": 3, 00:13:11.112 "num_base_bdevs_operational": 3, 00:13:11.112 "base_bdevs_list": [ 00:13:11.112 { 00:13:11.112 "name": "spare", 00:13:11.112 "uuid": "26b3eae4-2842-5727-8dcf-1ea1cd6db8f5", 00:13:11.112 "is_configured": true, 00:13:11.112 "data_offset": 0, 00:13:11.112 "data_size": 65536 00:13:11.112 }, 00:13:11.112 { 00:13:11.112 "name": null, 00:13:11.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.112 "is_configured": false, 00:13:11.112 "data_offset": 0, 00:13:11.112 "data_size": 65536 00:13:11.112 }, 00:13:11.112 { 00:13:11.112 "name": "BaseBdev3", 00:13:11.112 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:11.112 "is_configured": true, 00:13:11.112 "data_offset": 0, 00:13:11.112 "data_size": 65536 00:13:11.112 }, 00:13:11.112 { 00:13:11.112 "name": "BaseBdev4", 00:13:11.112 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:11.112 "is_configured": true, 00:13:11.112 "data_offset": 0, 00:13:11.112 "data_size": 65536 00:13:11.112 } 00:13:11.112 ] 00:13:11.112 }' 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.112 "name": "raid_bdev1", 00:13:11.112 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:11.112 "strip_size_kb": 0, 00:13:11.112 "state": "online", 00:13:11.112 "raid_level": "raid1", 00:13:11.112 "superblock": false, 00:13:11.112 "num_base_bdevs": 4, 00:13:11.112 "num_base_bdevs_discovered": 3, 00:13:11.112 "num_base_bdevs_operational": 3, 00:13:11.112 
"base_bdevs_list": [ 00:13:11.112 { 00:13:11.112 "name": "spare", 00:13:11.112 "uuid": "26b3eae4-2842-5727-8dcf-1ea1cd6db8f5", 00:13:11.112 "is_configured": true, 00:13:11.112 "data_offset": 0, 00:13:11.112 "data_size": 65536 00:13:11.112 }, 00:13:11.112 { 00:13:11.112 "name": null, 00:13:11.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.112 "is_configured": false, 00:13:11.112 "data_offset": 0, 00:13:11.112 "data_size": 65536 00:13:11.112 }, 00:13:11.112 { 00:13:11.112 "name": "BaseBdev3", 00:13:11.112 "uuid": "3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:11.112 "is_configured": true, 00:13:11.112 "data_offset": 0, 00:13:11.112 "data_size": 65536 00:13:11.112 }, 00:13:11.112 { 00:13:11.112 "name": "BaseBdev4", 00:13:11.112 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:11.112 "is_configured": true, 00:13:11.112 "data_offset": 0, 00:13:11.112 "data_size": 65536 00:13:11.112 } 00:13:11.112 ] 00:13:11.112 }' 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.112 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.372 "name": "raid_bdev1", 00:13:11.372 "uuid": "42be0180-9943-4157-85bf-bdaefdfdf810", 00:13:11.372 "strip_size_kb": 0, 00:13:11.372 "state": "online", 00:13:11.372 "raid_level": "raid1", 00:13:11.372 "superblock": false, 00:13:11.372 "num_base_bdevs": 4, 00:13:11.372 "num_base_bdevs_discovered": 3, 00:13:11.372 "num_base_bdevs_operational": 3, 00:13:11.372 "base_bdevs_list": [ 00:13:11.372 { 00:13:11.372 "name": "spare", 00:13:11.372 "uuid": "26b3eae4-2842-5727-8dcf-1ea1cd6db8f5", 00:13:11.372 "is_configured": true, 00:13:11.372 "data_offset": 0, 00:13:11.372 "data_size": 65536 00:13:11.372 }, 00:13:11.372 { 00:13:11.372 "name": null, 00:13:11.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.372 "is_configured": false, 00:13:11.372 "data_offset": 0, 00:13:11.372 "data_size": 65536 00:13:11.372 }, 00:13:11.372 { 00:13:11.372 "name": "BaseBdev3", 00:13:11.372 "uuid": 
"3d2f0b05-9dfd-504f-a826-caed3cb55ba0", 00:13:11.372 "is_configured": true, 00:13:11.372 "data_offset": 0, 00:13:11.372 "data_size": 65536 00:13:11.372 }, 00:13:11.372 { 00:13:11.372 "name": "BaseBdev4", 00:13:11.372 "uuid": "8f08e81b-a1e3-5de7-abdc-ef5046b73df2", 00:13:11.372 "is_configured": true, 00:13:11.372 "data_offset": 0, 00:13:11.372 "data_size": 65536 00:13:11.372 } 00:13:11.372 ] 00:13:11.372 }' 00:13:11.372 03:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.373 03:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.632 03:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.632 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.632 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.632 [2024-11-20 03:20:01.232464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.632 [2024-11-20 03:20:01.232559] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.632 [2024-11-20 03:20:01.232676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.632 [2024-11-20 03:20:01.232801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.632 [2024-11-20 03:20:01.232856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.632 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.632 03:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.632 03:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:11.632 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:11.632 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.632 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:11.890 /dev/nbd0 00:13:11.890 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:12.148 03:20:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.148 1+0 records in 00:13:12.148 1+0 records out 00:13:12.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261725 s, 15.7 MB/s 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:12.148 /dev/nbd1 00:13:12.148 
03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:12.148 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.407 1+0 records in 00:13:12.407 1+0 records out 00:13:12.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404928 s, 10.1 MB/s 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:12.407 03:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:12.408 03:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:12.408 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.408 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:12.408 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.408 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:12.408 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.408 03:20:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.667 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77374 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77374 ']' 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77374 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77374 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.927 killing process with pid 77374 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77374' 00:13:12.927 
03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77374 00:13:12.927 Received shutdown signal, test time was about 60.000000 seconds 00:13:12.927 00:13:12.927 Latency(us) 00:13:12.927 [2024-11-20T03:20:02.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.927 [2024-11-20T03:20:02.562Z] =================================================================================================================== 00:13:12.927 [2024-11-20T03:20:02.562Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:12.927 [2024-11-20 03:20:02.478769] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.927 03:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77374 00:13:13.496 [2024-11-20 03:20:02.970074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.433 03:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:14.434 00:13:14.434 real 0m17.216s 00:13:14.434 user 0m19.537s 00:13:14.434 sys 0m3.003s 00:13:14.434 03:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.434 03:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.434 ************************************ 00:13:14.434 END TEST raid_rebuild_test 00:13:14.434 ************************************ 00:13:14.693 03:20:04 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:14.693 03:20:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:14.693 03:20:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.693 03:20:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.693 ************************************ 00:13:14.693 START TEST raid_rebuild_test_sb 00:13:14.693 ************************************ 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77819 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77819 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77819 ']' 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.693 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.693 03:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.693 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.693 Zero copy mechanism will not be used. 00:13:14.693 [2024-11-20 03:20:04.237500] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:13:14.693 [2024-11-20 03:20:04.237646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77819 ] 00:13:14.953 [2024-11-20 03:20:04.411489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.953 [2024-11-20 03:20:04.524132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.212 [2024-11-20 03:20:04.723731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.212 [2024-11-20 03:20:04.723802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.472 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.472 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:15.472 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.472 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.472 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
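The `(( i <= num_base_bdevs ))` loop traced above (bdev_raid.sh@574-576) builds the `base_bdevs` array generically from the requested device count rather than hard-coding names. A minimal standalone sketch of that step, outside the test harness:

```shell
#!/usr/bin/env bash
# Sketch of the base_bdevs construction seen in the trace above:
# generate BaseBdev1..BaseBdevN for the requested number of base devices.
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```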
00:13:15.472 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.732 BaseBdev1_malloc 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.732 [2024-11-20 03:20:05.136765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.732 [2024-11-20 03:20:05.136834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.732 [2024-11-20 03:20:05.136857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:15.732 [2024-11-20 03:20:05.136869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.732 [2024-11-20 03:20:05.139021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.732 [2024-11-20 03:20:05.139061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.732 BaseBdev1 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.732 BaseBdev2_malloc 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.732 [2024-11-20 03:20:05.184113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:15.732 [2024-11-20 03:20:05.184176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.732 [2024-11-20 03:20:05.184196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:15.732 [2024-11-20 03:20:05.184209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.732 [2024-11-20 03:20:05.186270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.732 [2024-11-20 03:20:05.186306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.732 BaseBdev2 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.732 BaseBdev3_malloc 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
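Each base device in this trace is set up the same way: a malloc bdev is created, then a passthru bdev is layered on top of it, via two RPCs (bdev_raid.sh@602-603). A hedged sketch of that pair of calls as a helper, using the `rpc.py` and socket paths shown elsewhere in this log; it is not runnable without a live SPDK target listening on that socket:

```shell
#!/usr/bin/env bash
# Sketch only: issues the two per-device RPCs seen in this trace.
# Paths below are the ones this log uses; adjust for your environment.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock

make_base_bdev() {
    local name=$1
    # 32 MiB malloc bdev with 512-byte blocks, as in the trace above.
    "$RPC_PY" -s "$SOCK" bdev_malloc_create 32 512 -b "${name}_malloc"
    # Passthru bdev claiming the malloc bdev, exposed under the plain name.
    "$RPC_PY" -s "$SOCK" bdev_passthru_create -b "${name}_malloc" -p "$name"
}
```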
00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.732 [2024-11-20 03:20:05.254097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:15.732 [2024-11-20 03:20:05.254157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.732 [2024-11-20 03:20:05.254179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:15.732 [2024-11-20 03:20:05.254191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.732 [2024-11-20 03:20:05.256413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.732 [2024-11-20 03:20:05.256452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:15.732 BaseBdev3 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.732 BaseBdev4_malloc 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.732 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:15.732 [2024-11-20 03:20:05.309934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:15.732 [2024-11-20 03:20:05.310013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.732 [2024-11-20 03:20:05.310034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:15.732 [2024-11-20 03:20:05.310045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.732 [2024-11-20 03:20:05.312248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.733 [2024-11-20 03:20:05.312293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:15.733 BaseBdev4 00:13:15.733 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.733 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:15.733 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.733 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.733 spare_malloc 00:13:15.733 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.733 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:15.733 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.733 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.992 spare_delay 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.993 03:20:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.993 [2024-11-20 03:20:05.374260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.993 [2024-11-20 03:20:05.374325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.993 [2024-11-20 03:20:05.374347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:15.993 [2024-11-20 03:20:05.374358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.993 [2024-11-20 03:20:05.376659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.993 [2024-11-20 03:20:05.376697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.993 spare 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.993 [2024-11-20 03:20:05.382287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.993 [2024-11-20 03:20:05.384212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.993 [2024-11-20 03:20:05.384282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.993 [2024-11-20 03:20:05.384333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.993 [2024-11-20 03:20:05.384508] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:15.993 [2024-11-20 03:20:05.384531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:15.993 [2024-11-20 03:20:05.384803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:15.993 [2024-11-20 03:20:05.384983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:15.993 [2024-11-20 03:20:05.385000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:15.993 [2024-11-20 03:20:05.385149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.993 "name": "raid_bdev1", 00:13:15.993 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:15.993 "strip_size_kb": 0, 00:13:15.993 "state": "online", 00:13:15.993 "raid_level": "raid1", 00:13:15.993 "superblock": true, 00:13:15.993 "num_base_bdevs": 4, 00:13:15.993 "num_base_bdevs_discovered": 4, 00:13:15.993 "num_base_bdevs_operational": 4, 00:13:15.993 "base_bdevs_list": [ 00:13:15.993 { 00:13:15.993 "name": "BaseBdev1", 00:13:15.993 "uuid": "b25be4cd-b461-5727-9fc7-1fc09467546e", 00:13:15.993 "is_configured": true, 00:13:15.993 "data_offset": 2048, 00:13:15.993 "data_size": 63488 00:13:15.993 }, 00:13:15.993 { 00:13:15.993 "name": "BaseBdev2", 00:13:15.993 "uuid": "728c82e4-24e2-5001-b198-6dd44a8bbbdb", 00:13:15.993 "is_configured": true, 00:13:15.993 "data_offset": 2048, 00:13:15.993 "data_size": 63488 00:13:15.993 }, 00:13:15.993 { 00:13:15.993 "name": "BaseBdev3", 00:13:15.993 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:15.993 "is_configured": true, 00:13:15.993 "data_offset": 2048, 00:13:15.993 "data_size": 63488 00:13:15.993 }, 00:13:15.993 { 00:13:15.993 "name": "BaseBdev4", 00:13:15.993 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:15.993 "is_configured": true, 00:13:15.993 "data_offset": 2048, 00:13:15.993 "data_size": 63488 00:13:15.993 } 00:13:15.993 ] 00:13:15.993 }' 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.993 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:16.251 [2024-11-20 03:20:05.841918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.251 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.510 03:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.510 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:16.510 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:16.510 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:16.510 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:16.510 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
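Before `dd` can exercise `/dev/nbd0`, the harness polls for the device node with `waitfornbd` (autotest_common.sh@872-877 in this trace), retrying a bounded number of times until the name appears in `/proc/partitions`. A self-contained sketch of that retry loop, with the retry count and polling source taken from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling loop traced in this log: succeed as soon
# as the nbd device shows up in /proc/partitions, give up after 20 tries.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```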
00:13:16.510 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.510 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:16.511 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:16.511 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:16.511 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:16.511 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:16.511 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:16.511 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:16.511 03:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:16.511 [2024-11-20 03:20:06.109095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:16.511 /dev/nbd0 00:13:16.511 03:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:16.770 
03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.770 1+0 records in 00:13:16.770 1+0 records out 00:13:16.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382743 s, 10.7 MB/s 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:16.770 03:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:22.039 63488+0 records in 00:13:22.039 63488+0 records out 00:13:22.039 32505856 bytes (33 MB, 31 MiB) copied, 5.30633 s, 6.1 MB/s 00:13:22.039 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:22.039 03:20:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.039 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:22.039 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:22.039 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:22.039 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.039 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:22.298 [2024-11-20 03:20:11.684550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.298 [2024-11-20 03:20:11.724586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.298 
03:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.298 "name": "raid_bdev1", 00:13:22.298 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:22.298 "strip_size_kb": 0, 00:13:22.298 "state": 
"online", 00:13:22.298 "raid_level": "raid1", 00:13:22.298 "superblock": true, 00:13:22.298 "num_base_bdevs": 4, 00:13:22.298 "num_base_bdevs_discovered": 3, 00:13:22.298 "num_base_bdevs_operational": 3, 00:13:22.298 "base_bdevs_list": [ 00:13:22.298 { 00:13:22.298 "name": null, 00:13:22.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.298 "is_configured": false, 00:13:22.298 "data_offset": 0, 00:13:22.298 "data_size": 63488 00:13:22.298 }, 00:13:22.298 { 00:13:22.298 "name": "BaseBdev2", 00:13:22.298 "uuid": "728c82e4-24e2-5001-b198-6dd44a8bbbdb", 00:13:22.298 "is_configured": true, 00:13:22.298 "data_offset": 2048, 00:13:22.298 "data_size": 63488 00:13:22.298 }, 00:13:22.298 { 00:13:22.298 "name": "BaseBdev3", 00:13:22.298 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:22.298 "is_configured": true, 00:13:22.298 "data_offset": 2048, 00:13:22.298 "data_size": 63488 00:13:22.298 }, 00:13:22.298 { 00:13:22.298 "name": "BaseBdev4", 00:13:22.298 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:22.298 "is_configured": true, 00:13:22.298 "data_offset": 2048, 00:13:22.298 "data_size": 63488 00:13:22.298 } 00:13:22.298 ] 00:13:22.298 }' 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.298 03:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.865 03:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.865 03:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.865 03:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.865 [2024-11-20 03:20:12.195799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.866 [2024-11-20 03:20:12.211223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:22.866 03:20:12 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.866 03:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:22.866 [2024-11-20 03:20:12.213181] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.802 "name": "raid_bdev1", 00:13:23.802 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:23.802 "strip_size_kb": 0, 00:13:23.802 "state": "online", 00:13:23.802 "raid_level": "raid1", 00:13:23.802 "superblock": true, 00:13:23.802 "num_base_bdevs": 4, 00:13:23.802 "num_base_bdevs_discovered": 4, 00:13:23.802 "num_base_bdevs_operational": 4, 00:13:23.802 "process": { 00:13:23.802 "type": "rebuild", 00:13:23.802 "target": "spare", 00:13:23.802 "progress": { 00:13:23.802 "blocks": 20480, 
00:13:23.802 "percent": 32 00:13:23.802 } 00:13:23.802 }, 00:13:23.802 "base_bdevs_list": [ 00:13:23.802 { 00:13:23.802 "name": "spare", 00:13:23.802 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:23.802 "is_configured": true, 00:13:23.802 "data_offset": 2048, 00:13:23.802 "data_size": 63488 00:13:23.802 }, 00:13:23.802 { 00:13:23.802 "name": "BaseBdev2", 00:13:23.802 "uuid": "728c82e4-24e2-5001-b198-6dd44a8bbbdb", 00:13:23.802 "is_configured": true, 00:13:23.802 "data_offset": 2048, 00:13:23.802 "data_size": 63488 00:13:23.802 }, 00:13:23.802 { 00:13:23.802 "name": "BaseBdev3", 00:13:23.802 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:23.802 "is_configured": true, 00:13:23.802 "data_offset": 2048, 00:13:23.802 "data_size": 63488 00:13:23.802 }, 00:13:23.802 { 00:13:23.802 "name": "BaseBdev4", 00:13:23.802 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:23.802 "is_configured": true, 00:13:23.802 "data_offset": 2048, 00:13:23.802 "data_size": 63488 00:13:23.802 } 00:13:23.802 ] 00:13:23.802 }' 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.802 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.802 [2024-11-20 03:20:13.332474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.802 [2024-11-20 03:20:13.418724] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.802 [2024-11-20 03:20:13.418797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.802 [2024-11-20 03:20:13.418814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.802 [2024-11-20 03:20:13.418824] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.061 "name": "raid_bdev1", 00:13:24.061 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:24.061 "strip_size_kb": 0, 00:13:24.061 "state": "online", 00:13:24.061 "raid_level": "raid1", 00:13:24.061 "superblock": true, 00:13:24.061 "num_base_bdevs": 4, 00:13:24.061 "num_base_bdevs_discovered": 3, 00:13:24.061 "num_base_bdevs_operational": 3, 00:13:24.061 "base_bdevs_list": [ 00:13:24.061 { 00:13:24.061 "name": null, 00:13:24.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.061 "is_configured": false, 00:13:24.061 "data_offset": 0, 00:13:24.061 "data_size": 63488 00:13:24.061 }, 00:13:24.061 { 00:13:24.061 "name": "BaseBdev2", 00:13:24.061 "uuid": "728c82e4-24e2-5001-b198-6dd44a8bbbdb", 00:13:24.061 "is_configured": true, 00:13:24.061 "data_offset": 2048, 00:13:24.061 "data_size": 63488 00:13:24.061 }, 00:13:24.061 { 00:13:24.061 "name": "BaseBdev3", 00:13:24.061 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:24.061 "is_configured": true, 00:13:24.061 "data_offset": 2048, 00:13:24.061 "data_size": 63488 00:13:24.061 }, 00:13:24.061 { 00:13:24.061 "name": "BaseBdev4", 00:13:24.061 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:24.061 "is_configured": true, 00:13:24.061 "data_offset": 2048, 00:13:24.061 "data_size": 63488 00:13:24.061 } 00:13:24.061 ] 00:13:24.061 }' 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.061 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.320 "name": "raid_bdev1", 00:13:24.320 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:24.320 "strip_size_kb": 0, 00:13:24.320 "state": "online", 00:13:24.320 "raid_level": "raid1", 00:13:24.320 "superblock": true, 00:13:24.320 "num_base_bdevs": 4, 00:13:24.320 "num_base_bdevs_discovered": 3, 00:13:24.320 "num_base_bdevs_operational": 3, 00:13:24.320 "base_bdevs_list": [ 00:13:24.320 { 00:13:24.320 "name": null, 00:13:24.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.320 "is_configured": false, 00:13:24.320 "data_offset": 0, 00:13:24.320 "data_size": 63488 00:13:24.320 }, 00:13:24.320 { 00:13:24.320 "name": "BaseBdev2", 00:13:24.320 "uuid": "728c82e4-24e2-5001-b198-6dd44a8bbbdb", 00:13:24.320 "is_configured": true, 00:13:24.320 "data_offset": 2048, 00:13:24.320 "data_size": 63488 00:13:24.320 }, 00:13:24.320 { 00:13:24.320 "name": "BaseBdev3", 00:13:24.320 "uuid": 
"128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:24.320 "is_configured": true, 00:13:24.320 "data_offset": 2048, 00:13:24.320 "data_size": 63488 00:13:24.320 }, 00:13:24.320 { 00:13:24.320 "name": "BaseBdev4", 00:13:24.320 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:24.320 "is_configured": true, 00:13:24.320 "data_offset": 2048, 00:13:24.320 "data_size": 63488 00:13:24.320 } 00:13:24.320 ] 00:13:24.320 }' 00:13:24.320 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.579 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.579 03:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.579 03:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.579 03:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.579 03:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.579 03:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.579 [2024-11-20 03:20:14.009386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.579 [2024-11-20 03:20:14.024016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:24.579 03:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.579 03:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:24.579 [2024-11-20 03:20:14.026077] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.516 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.516 "name": "raid_bdev1", 00:13:25.516 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:25.516 "strip_size_kb": 0, 00:13:25.516 "state": "online", 00:13:25.516 "raid_level": "raid1", 00:13:25.516 "superblock": true, 00:13:25.516 "num_base_bdevs": 4, 00:13:25.517 "num_base_bdevs_discovered": 4, 00:13:25.517 "num_base_bdevs_operational": 4, 00:13:25.517 "process": { 00:13:25.517 "type": "rebuild", 00:13:25.517 "target": "spare", 00:13:25.517 "progress": { 00:13:25.517 "blocks": 20480, 00:13:25.517 "percent": 32 00:13:25.517 } 00:13:25.517 }, 00:13:25.517 "base_bdevs_list": [ 00:13:25.517 { 00:13:25.517 "name": "spare", 00:13:25.517 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:25.517 "is_configured": true, 00:13:25.517 "data_offset": 2048, 00:13:25.517 "data_size": 63488 00:13:25.517 }, 00:13:25.517 { 00:13:25.517 "name": "BaseBdev2", 00:13:25.517 "uuid": "728c82e4-24e2-5001-b198-6dd44a8bbbdb", 00:13:25.517 "is_configured": true, 00:13:25.517 "data_offset": 2048, 
00:13:25.517 "data_size": 63488 00:13:25.517 }, 00:13:25.517 { 00:13:25.517 "name": "BaseBdev3", 00:13:25.517 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:25.517 "is_configured": true, 00:13:25.517 "data_offset": 2048, 00:13:25.517 "data_size": 63488 00:13:25.517 }, 00:13:25.517 { 00:13:25.517 "name": "BaseBdev4", 00:13:25.517 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:25.517 "is_configured": true, 00:13:25.517 "data_offset": 2048, 00:13:25.517 "data_size": 63488 00:13:25.517 } 00:13:25.517 ] 00:13:25.517 }' 00:13:25.517 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.517 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.517 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:25.776 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.776 [2024-11-20 03:20:15.193424] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.776 [2024-11-20 03:20:15.331708] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.776 "name": "raid_bdev1", 00:13:25.776 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:25.776 "strip_size_kb": 0, 00:13:25.776 "state": "online", 00:13:25.776 "raid_level": "raid1", 00:13:25.776 "superblock": true, 00:13:25.776 "num_base_bdevs": 4, 
00:13:25.776 "num_base_bdevs_discovered": 3, 00:13:25.776 "num_base_bdevs_operational": 3, 00:13:25.776 "process": { 00:13:25.776 "type": "rebuild", 00:13:25.776 "target": "spare", 00:13:25.776 "progress": { 00:13:25.776 "blocks": 24576, 00:13:25.776 "percent": 38 00:13:25.776 } 00:13:25.776 }, 00:13:25.776 "base_bdevs_list": [ 00:13:25.776 { 00:13:25.776 "name": "spare", 00:13:25.776 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:25.776 "is_configured": true, 00:13:25.776 "data_offset": 2048, 00:13:25.776 "data_size": 63488 00:13:25.776 }, 00:13:25.776 { 00:13:25.776 "name": null, 00:13:25.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.776 "is_configured": false, 00:13:25.776 "data_offset": 0, 00:13:25.776 "data_size": 63488 00:13:25.776 }, 00:13:25.776 { 00:13:25.776 "name": "BaseBdev3", 00:13:25.776 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:25.776 "is_configured": true, 00:13:25.776 "data_offset": 2048, 00:13:25.776 "data_size": 63488 00:13:25.776 }, 00:13:25.776 { 00:13:25.776 "name": "BaseBdev4", 00:13:25.776 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:25.776 "is_configured": true, 00:13:25.776 "data_offset": 2048, 00:13:25.776 "data_size": 63488 00:13:25.776 } 00:13:25.776 ] 00:13:25.776 }' 00:13:25.776 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.035 "name": "raid_bdev1", 00:13:26.035 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:26.035 "strip_size_kb": 0, 00:13:26.035 "state": "online", 00:13:26.035 "raid_level": "raid1", 00:13:26.035 "superblock": true, 00:13:26.035 "num_base_bdevs": 4, 00:13:26.035 "num_base_bdevs_discovered": 3, 00:13:26.035 "num_base_bdevs_operational": 3, 00:13:26.035 "process": { 00:13:26.035 "type": "rebuild", 00:13:26.035 "target": "spare", 00:13:26.035 "progress": { 00:13:26.035 "blocks": 26624, 00:13:26.035 "percent": 41 00:13:26.035 } 00:13:26.035 }, 00:13:26.035 "base_bdevs_list": [ 00:13:26.035 { 00:13:26.035 "name": "spare", 00:13:26.035 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:26.035 "is_configured": true, 00:13:26.035 "data_offset": 2048, 00:13:26.035 "data_size": 63488 00:13:26.035 }, 00:13:26.035 { 
00:13:26.035 "name": null, 00:13:26.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.035 "is_configured": false, 00:13:26.035 "data_offset": 0, 00:13:26.035 "data_size": 63488 00:13:26.035 }, 00:13:26.035 { 00:13:26.035 "name": "BaseBdev3", 00:13:26.035 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:26.035 "is_configured": true, 00:13:26.035 "data_offset": 2048, 00:13:26.035 "data_size": 63488 00:13:26.035 }, 00:13:26.035 { 00:13:26.035 "name": "BaseBdev4", 00:13:26.035 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:26.035 "is_configured": true, 00:13:26.035 "data_offset": 2048, 00:13:26.035 "data_size": 63488 00:13:26.035 } 00:13:26.035 ] 00:13:26.035 }' 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.035 03:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.412 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.412 "name": "raid_bdev1", 00:13:27.412 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:27.412 "strip_size_kb": 0, 00:13:27.412 "state": "online", 00:13:27.412 "raid_level": "raid1", 00:13:27.412 "superblock": true, 00:13:27.412 "num_base_bdevs": 4, 00:13:27.412 "num_base_bdevs_discovered": 3, 00:13:27.412 "num_base_bdevs_operational": 3, 00:13:27.412 "process": { 00:13:27.412 "type": "rebuild", 00:13:27.412 "target": "spare", 00:13:27.412 "progress": { 00:13:27.412 "blocks": 51200, 00:13:27.412 "percent": 80 00:13:27.412 } 00:13:27.412 }, 00:13:27.412 "base_bdevs_list": [ 00:13:27.412 { 00:13:27.412 "name": "spare", 00:13:27.412 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:27.412 "is_configured": true, 00:13:27.412 "data_offset": 2048, 00:13:27.412 "data_size": 63488 00:13:27.412 }, 00:13:27.412 { 00:13:27.412 "name": null, 00:13:27.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.412 "is_configured": false, 00:13:27.412 "data_offset": 0, 00:13:27.412 "data_size": 63488 00:13:27.412 }, 00:13:27.412 { 00:13:27.412 "name": "BaseBdev3", 00:13:27.412 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:27.412 "is_configured": true, 00:13:27.412 "data_offset": 2048, 00:13:27.413 "data_size": 63488 00:13:27.413 }, 00:13:27.413 { 00:13:27.413 "name": "BaseBdev4", 00:13:27.413 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:27.413 "is_configured": true, 00:13:27.413 "data_offset": 
2048, 00:13:27.413 "data_size": 63488 00:13:27.413 } 00:13:27.413 ] 00:13:27.413 }' 00:13:27.413 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.413 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.413 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.413 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.413 03:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.671 [2024-11-20 03:20:17.241308] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:27.671 [2024-11-20 03:20:17.241387] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:27.671 [2024-11-20 03:20:17.241524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.239 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.239 "name": "raid_bdev1", 00:13:28.239 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:28.239 "strip_size_kb": 0, 00:13:28.239 "state": "online", 00:13:28.239 "raid_level": "raid1", 00:13:28.239 "superblock": true, 00:13:28.239 "num_base_bdevs": 4, 00:13:28.239 "num_base_bdevs_discovered": 3, 00:13:28.239 "num_base_bdevs_operational": 3, 00:13:28.239 "base_bdevs_list": [ 00:13:28.239 { 00:13:28.239 "name": "spare", 00:13:28.239 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:28.239 "is_configured": true, 00:13:28.239 "data_offset": 2048, 00:13:28.239 "data_size": 63488 00:13:28.239 }, 00:13:28.239 { 00:13:28.239 "name": null, 00:13:28.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.239 "is_configured": false, 00:13:28.239 "data_offset": 0, 00:13:28.239 "data_size": 63488 00:13:28.239 }, 00:13:28.240 { 00:13:28.240 "name": "BaseBdev3", 00:13:28.240 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:28.240 "is_configured": true, 00:13:28.240 "data_offset": 2048, 00:13:28.240 "data_size": 63488 00:13:28.240 }, 00:13:28.240 { 00:13:28.240 "name": "BaseBdev4", 00:13:28.240 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:28.240 "is_configured": true, 00:13:28.240 "data_offset": 2048, 00:13:28.240 "data_size": 63488 00:13:28.240 } 00:13:28.240 ] 00:13:28.240 }' 00:13:28.240 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.499 "name": "raid_bdev1", 00:13:28.499 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:28.499 "strip_size_kb": 0, 00:13:28.499 "state": "online", 00:13:28.499 "raid_level": "raid1", 00:13:28.499 "superblock": true, 00:13:28.499 "num_base_bdevs": 4, 00:13:28.499 "num_base_bdevs_discovered": 3, 00:13:28.499 "num_base_bdevs_operational": 3, 00:13:28.499 "base_bdevs_list": [ 00:13:28.499 { 00:13:28.499 "name": "spare", 00:13:28.499 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:28.499 "is_configured": true, 00:13:28.499 "data_offset": 2048, 00:13:28.499 "data_size": 63488 
00:13:28.499 }, 00:13:28.499 { 00:13:28.499 "name": null, 00:13:28.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.499 "is_configured": false, 00:13:28.499 "data_offset": 0, 00:13:28.499 "data_size": 63488 00:13:28.499 }, 00:13:28.499 { 00:13:28.499 "name": "BaseBdev3", 00:13:28.499 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:28.499 "is_configured": true, 00:13:28.499 "data_offset": 2048, 00:13:28.499 "data_size": 63488 00:13:28.499 }, 00:13:28.499 { 00:13:28.499 "name": "BaseBdev4", 00:13:28.499 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:28.499 "is_configured": true, 00:13:28.499 "data_offset": 2048, 00:13:28.499 "data_size": 63488 00:13:28.499 } 00:13:28.499 ] 00:13:28.499 }' 00:13:28.499 03:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.499 03:20:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.499 "name": "raid_bdev1", 00:13:28.499 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:28.499 "strip_size_kb": 0, 00:13:28.499 "state": "online", 00:13:28.499 "raid_level": "raid1", 00:13:28.499 "superblock": true, 00:13:28.499 "num_base_bdevs": 4, 00:13:28.499 "num_base_bdevs_discovered": 3, 00:13:28.499 "num_base_bdevs_operational": 3, 00:13:28.499 "base_bdevs_list": [ 00:13:28.499 { 00:13:28.499 "name": "spare", 00:13:28.499 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:28.499 "is_configured": true, 00:13:28.499 "data_offset": 2048, 00:13:28.499 "data_size": 63488 00:13:28.499 }, 00:13:28.499 { 00:13:28.499 "name": null, 00:13:28.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.499 "is_configured": false, 00:13:28.499 "data_offset": 0, 00:13:28.499 "data_size": 63488 00:13:28.499 }, 00:13:28.499 { 00:13:28.499 "name": "BaseBdev3", 00:13:28.499 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:28.499 "is_configured": true, 00:13:28.499 "data_offset": 2048, 00:13:28.499 "data_size": 63488 00:13:28.499 }, 
00:13:28.499 { 00:13:28.499 "name": "BaseBdev4", 00:13:28.499 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:28.499 "is_configured": true, 00:13:28.499 "data_offset": 2048, 00:13:28.499 "data_size": 63488 00:13:28.499 } 00:13:28.499 ] 00:13:28.499 }' 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.499 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.066 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:29.066 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.066 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.066 [2024-11-20 03:20:18.540960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.066 [2024-11-20 03:20:18.541000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.066 [2024-11-20 03:20:18.541098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.066 [2024-11-20 03:20:18.541191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.066 [2024-11-20 03:20:18.541210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:29.066 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.066 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.066 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.066 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:29.067 03:20:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.067 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:29.325 /dev/nbd0 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.325 1+0 records in 00:13:29.325 1+0 records out 00:13:29.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374577 s, 10.9 MB/s 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.325 03:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:29.585 /dev/nbd1 00:13:29.585 03:20:19 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.585 1+0 records in 00:13:29.585 1+0 records out 00:13:29.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311849 s, 13.1 MB/s 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:29.585 03:20:19 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.585 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:29.843 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:29.843 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.843 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:29.843 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.843 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:29.843 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.843 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.103 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.362 [2024-11-20 03:20:19.960204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:13:30.362 [2024-11-20 03:20:19.960271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.362 [2024-11-20 03:20:19.960297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:30.362 [2024-11-20 03:20:19.960308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.362 [2024-11-20 03:20:19.962870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.362 [2024-11-20 03:20:19.962915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:30.362 [2024-11-20 03:20:19.963022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:30.362 [2024-11-20 03:20:19.963107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.362 [2024-11-20 03:20:19.963273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.362 [2024-11-20 03:20:19.963386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.362 spare 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.362 03:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.621 [2024-11-20 03:20:20.063303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:30.621 [2024-11-20 03:20:20.063352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:30.621 [2024-11-20 03:20:20.063761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:30.621 [2024-11-20 03:20:20.064007] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:30.621 [2024-11-20 03:20:20.064031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:30.621 [2024-11-20 03:20:20.064259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.621 "name": "raid_bdev1", 00:13:30.621 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:30.621 "strip_size_kb": 0, 00:13:30.621 "state": "online", 00:13:30.621 "raid_level": "raid1", 00:13:30.621 "superblock": true, 00:13:30.621 "num_base_bdevs": 4, 00:13:30.621 "num_base_bdevs_discovered": 3, 00:13:30.621 "num_base_bdevs_operational": 3, 00:13:30.621 "base_bdevs_list": [ 00:13:30.621 { 00:13:30.621 "name": "spare", 00:13:30.621 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:30.621 "is_configured": true, 00:13:30.621 "data_offset": 2048, 00:13:30.621 "data_size": 63488 00:13:30.621 }, 00:13:30.621 { 00:13:30.621 "name": null, 00:13:30.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.621 "is_configured": false, 00:13:30.621 "data_offset": 2048, 00:13:30.621 "data_size": 63488 00:13:30.621 }, 00:13:30.621 { 00:13:30.621 "name": "BaseBdev3", 00:13:30.621 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:30.621 "is_configured": true, 00:13:30.621 "data_offset": 2048, 00:13:30.621 "data_size": 63488 00:13:30.621 }, 00:13:30.621 { 00:13:30.621 "name": "BaseBdev4", 00:13:30.621 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:30.621 "is_configured": true, 00:13:30.621 "data_offset": 2048, 00:13:30.621 "data_size": 63488 00:13:30.621 } 00:13:30.621 ] 00:13:30.621 }' 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.621 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.880 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.880 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.880 03:20:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.880 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.880 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.880 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.880 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.880 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.880 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.139 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.139 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.139 "name": "raid_bdev1", 00:13:31.139 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:31.139 "strip_size_kb": 0, 00:13:31.139 "state": "online", 00:13:31.139 "raid_level": "raid1", 00:13:31.139 "superblock": true, 00:13:31.139 "num_base_bdevs": 4, 00:13:31.139 "num_base_bdevs_discovered": 3, 00:13:31.139 "num_base_bdevs_operational": 3, 00:13:31.139 "base_bdevs_list": [ 00:13:31.139 { 00:13:31.139 "name": "spare", 00:13:31.139 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:31.139 "is_configured": true, 00:13:31.139 "data_offset": 2048, 00:13:31.139 "data_size": 63488 00:13:31.139 }, 00:13:31.139 { 00:13:31.139 "name": null, 00:13:31.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.139 "is_configured": false, 00:13:31.140 "data_offset": 2048, 00:13:31.140 "data_size": 63488 00:13:31.140 }, 00:13:31.140 { 00:13:31.140 "name": "BaseBdev3", 00:13:31.140 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:31.140 "is_configured": true, 00:13:31.140 "data_offset": 2048, 00:13:31.140 "data_size": 63488 00:13:31.140 
}, 00:13:31.140 { 00:13:31.140 "name": "BaseBdev4", 00:13:31.140 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:31.140 "is_configured": true, 00:13:31.140 "data_offset": 2048, 00:13:31.140 "data_size": 63488 00:13:31.140 } 00:13:31.140 ] 00:13:31.140 }' 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.140 [2024-11-20 03:20:20.667192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.140 "name": "raid_bdev1", 00:13:31.140 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:31.140 "strip_size_kb": 0, 00:13:31.140 "state": "online", 00:13:31.140 "raid_level": "raid1", 00:13:31.140 "superblock": true, 00:13:31.140 "num_base_bdevs": 4, 00:13:31.140 "num_base_bdevs_discovered": 2, 00:13:31.140 "num_base_bdevs_operational": 
2, 00:13:31.140 "base_bdevs_list": [ 00:13:31.140 { 00:13:31.140 "name": null, 00:13:31.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.140 "is_configured": false, 00:13:31.140 "data_offset": 0, 00:13:31.140 "data_size": 63488 00:13:31.140 }, 00:13:31.140 { 00:13:31.140 "name": null, 00:13:31.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.140 "is_configured": false, 00:13:31.140 "data_offset": 2048, 00:13:31.140 "data_size": 63488 00:13:31.140 }, 00:13:31.140 { 00:13:31.140 "name": "BaseBdev3", 00:13:31.140 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:31.140 "is_configured": true, 00:13:31.140 "data_offset": 2048, 00:13:31.140 "data_size": 63488 00:13:31.140 }, 00:13:31.140 { 00:13:31.140 "name": "BaseBdev4", 00:13:31.140 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:31.140 "is_configured": true, 00:13:31.140 "data_offset": 2048, 00:13:31.140 "data_size": 63488 00:13:31.140 } 00:13:31.140 ] 00:13:31.140 }' 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.140 03:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.709 03:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.709 03:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.709 03:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.709 [2024-11-20 03:20:21.066576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.709 [2024-11-20 03:20:21.066790] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:31.709 [2024-11-20 03:20:21.066811] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:31.709 [2024-11-20 03:20:21.066850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.709 [2024-11-20 03:20:21.081045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:31.709 03:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.709 03:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:31.709 [2024-11-20 03:20:21.082909] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.647 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.647 "name": "raid_bdev1", 00:13:32.647 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:32.647 "strip_size_kb": 0, 00:13:32.647 "state": "online", 00:13:32.647 "raid_level": "raid1", 
00:13:32.647 "superblock": true, 00:13:32.647 "num_base_bdevs": 4, 00:13:32.647 "num_base_bdevs_discovered": 3, 00:13:32.647 "num_base_bdevs_operational": 3, 00:13:32.647 "process": { 00:13:32.647 "type": "rebuild", 00:13:32.647 "target": "spare", 00:13:32.647 "progress": { 00:13:32.647 "blocks": 20480, 00:13:32.647 "percent": 32 00:13:32.647 } 00:13:32.647 }, 00:13:32.647 "base_bdevs_list": [ 00:13:32.647 { 00:13:32.647 "name": "spare", 00:13:32.648 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:32.648 "is_configured": true, 00:13:32.648 "data_offset": 2048, 00:13:32.648 "data_size": 63488 00:13:32.648 }, 00:13:32.648 { 00:13:32.648 "name": null, 00:13:32.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.648 "is_configured": false, 00:13:32.648 "data_offset": 2048, 00:13:32.648 "data_size": 63488 00:13:32.648 }, 00:13:32.648 { 00:13:32.648 "name": "BaseBdev3", 00:13:32.648 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:32.648 "is_configured": true, 00:13:32.648 "data_offset": 2048, 00:13:32.648 "data_size": 63488 00:13:32.648 }, 00:13:32.648 { 00:13:32.648 "name": "BaseBdev4", 00:13:32.648 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:32.648 "is_configured": true, 00:13:32.648 "data_offset": 2048, 00:13:32.648 "data_size": 63488 00:13:32.648 } 00:13:32.648 ] 00:13:32.648 }' 00:13:32.648 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.648 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.648 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.648 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.648 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:32.648 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:32.648 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.648 [2024-11-20 03:20:22.230424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.907 [2024-11-20 03:20:22.288561] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:32.907 [2024-11-20 03:20:22.288661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.907 [2024-11-20 03:20:22.288684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.907 [2024-11-20 03:20:22.288693] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.907 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.907 "name": "raid_bdev1", 00:13:32.907 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:32.907 "strip_size_kb": 0, 00:13:32.907 "state": "online", 00:13:32.907 "raid_level": "raid1", 00:13:32.907 "superblock": true, 00:13:32.907 "num_base_bdevs": 4, 00:13:32.907 "num_base_bdevs_discovered": 2, 00:13:32.907 "num_base_bdevs_operational": 2, 00:13:32.907 "base_bdevs_list": [ 00:13:32.907 { 00:13:32.907 "name": null, 00:13:32.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.908 "is_configured": false, 00:13:32.908 "data_offset": 0, 00:13:32.908 "data_size": 63488 00:13:32.908 }, 00:13:32.908 { 00:13:32.908 "name": null, 00:13:32.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.908 "is_configured": false, 00:13:32.908 "data_offset": 2048, 00:13:32.908 "data_size": 63488 00:13:32.908 }, 00:13:32.908 { 00:13:32.908 "name": "BaseBdev3", 00:13:32.908 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:32.908 "is_configured": true, 00:13:32.908 "data_offset": 2048, 00:13:32.908 "data_size": 63488 00:13:32.908 }, 00:13:32.908 { 00:13:32.908 "name": "BaseBdev4", 00:13:32.908 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:32.908 "is_configured": true, 00:13:32.908 "data_offset": 2048, 00:13:32.908 "data_size": 63488 00:13:32.908 } 00:13:32.908 ] 00:13:32.908 }' 00:13:32.908 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:32.908 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.167 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:33.167 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.167 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.167 [2024-11-20 03:20:22.794637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.167 [2024-11-20 03:20:22.794709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.167 [2024-11-20 03:20:22.794740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:33.167 [2024-11-20 03:20:22.794751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.167 [2024-11-20 03:20:22.795319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.167 [2024-11-20 03:20:22.795350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.167 [2024-11-20 03:20:22.795479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:33.167 [2024-11-20 03:20:22.795512] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:33.167 [2024-11-20 03:20:22.795530] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:33.167 [2024-11-20 03:20:22.795568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.426 [2024-11-20 03:20:22.812540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:33.426 spare 00:13:33.426 03:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.426 03:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:33.426 [2024-11-20 03:20:22.814695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.363 "name": "raid_bdev1", 00:13:34.363 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:34.363 "strip_size_kb": 0, 00:13:34.363 "state": "online", 00:13:34.363 
"raid_level": "raid1", 00:13:34.363 "superblock": true, 00:13:34.363 "num_base_bdevs": 4, 00:13:34.363 "num_base_bdevs_discovered": 3, 00:13:34.363 "num_base_bdevs_operational": 3, 00:13:34.363 "process": { 00:13:34.363 "type": "rebuild", 00:13:34.363 "target": "spare", 00:13:34.363 "progress": { 00:13:34.363 "blocks": 20480, 00:13:34.363 "percent": 32 00:13:34.363 } 00:13:34.363 }, 00:13:34.363 "base_bdevs_list": [ 00:13:34.363 { 00:13:34.363 "name": "spare", 00:13:34.363 "uuid": "6146d8b7-c4f3-5f34-b677-152a462facf4", 00:13:34.363 "is_configured": true, 00:13:34.363 "data_offset": 2048, 00:13:34.363 "data_size": 63488 00:13:34.363 }, 00:13:34.363 { 00:13:34.363 "name": null, 00:13:34.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.363 "is_configured": false, 00:13:34.363 "data_offset": 2048, 00:13:34.363 "data_size": 63488 00:13:34.363 }, 00:13:34.363 { 00:13:34.363 "name": "BaseBdev3", 00:13:34.363 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:34.363 "is_configured": true, 00:13:34.363 "data_offset": 2048, 00:13:34.363 "data_size": 63488 00:13:34.363 }, 00:13:34.363 { 00:13:34.363 "name": "BaseBdev4", 00:13:34.363 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:34.363 "is_configured": true, 00:13:34.363 "data_offset": 2048, 00:13:34.363 "data_size": 63488 00:13:34.363 } 00:13:34.363 ] 00:13:34.363 }' 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.363 03:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.363 [2024-11-20 03:20:23.974261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.621 [2024-11-20 03:20:24.020513] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:34.621 [2024-11-20 03:20:24.020588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.621 [2024-11-20 03:20:24.020605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.621 [2024-11-20 03:20:24.020630] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.621 
03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.621 "name": "raid_bdev1", 00:13:34.621 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:34.621 "strip_size_kb": 0, 00:13:34.621 "state": "online", 00:13:34.621 "raid_level": "raid1", 00:13:34.621 "superblock": true, 00:13:34.621 "num_base_bdevs": 4, 00:13:34.621 "num_base_bdevs_discovered": 2, 00:13:34.621 "num_base_bdevs_operational": 2, 00:13:34.621 "base_bdevs_list": [ 00:13:34.621 { 00:13:34.621 "name": null, 00:13:34.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.621 "is_configured": false, 00:13:34.621 "data_offset": 0, 00:13:34.621 "data_size": 63488 00:13:34.621 }, 00:13:34.621 { 00:13:34.621 "name": null, 00:13:34.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.621 "is_configured": false, 00:13:34.621 "data_offset": 2048, 00:13:34.621 "data_size": 63488 00:13:34.621 }, 00:13:34.621 { 00:13:34.621 "name": "BaseBdev3", 00:13:34.621 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:34.621 "is_configured": true, 00:13:34.621 "data_offset": 2048, 00:13:34.621 "data_size": 63488 00:13:34.621 }, 00:13:34.621 { 00:13:34.621 "name": "BaseBdev4", 00:13:34.621 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:34.621 "is_configured": true, 00:13:34.621 "data_offset": 2048, 00:13:34.621 "data_size": 63488 00:13:34.621 } 00:13:34.621 ] 00:13:34.621 }' 00:13:34.621 03:20:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.621 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.879 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.879 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.879 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.879 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.879 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.138 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.138 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.138 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.138 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.138 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.138 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.138 "name": "raid_bdev1", 00:13:35.138 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:35.138 "strip_size_kb": 0, 00:13:35.138 "state": "online", 00:13:35.138 "raid_level": "raid1", 00:13:35.138 "superblock": true, 00:13:35.138 "num_base_bdevs": 4, 00:13:35.138 "num_base_bdevs_discovered": 2, 00:13:35.138 "num_base_bdevs_operational": 2, 00:13:35.138 "base_bdevs_list": [ 00:13:35.138 { 00:13:35.138 "name": null, 00:13:35.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.138 "is_configured": false, 00:13:35.138 "data_offset": 0, 00:13:35.138 "data_size": 63488 00:13:35.138 }, 00:13:35.138 
{ 00:13:35.138 "name": null, 00:13:35.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.138 "is_configured": false, 00:13:35.138 "data_offset": 2048, 00:13:35.138 "data_size": 63488 00:13:35.138 }, 00:13:35.138 { 00:13:35.138 "name": "BaseBdev3", 00:13:35.138 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:35.138 "is_configured": true, 00:13:35.138 "data_offset": 2048, 00:13:35.138 "data_size": 63488 00:13:35.138 }, 00:13:35.138 { 00:13:35.138 "name": "BaseBdev4", 00:13:35.138 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:35.138 "is_configured": true, 00:13:35.138 "data_offset": 2048, 00:13:35.139 "data_size": 63488 00:13:35.139 } 00:13:35.139 ] 00:13:35.139 }' 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.139 [2024-11-20 03:20:24.669502] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.139 [2024-11-20 03:20:24.669575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.139 [2024-11-20 03:20:24.669598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:35.139 [2024-11-20 03:20:24.669626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.139 [2024-11-20 03:20:24.670131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.139 [2024-11-20 03:20:24.670161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.139 [2024-11-20 03:20:24.670249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:35.139 [2024-11-20 03:20:24.670268] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:35.139 [2024-11-20 03:20:24.670278] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:35.139 [2024-11-20 03:20:24.670307] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:35.139 BaseBdev1 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.139 03:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.075 03:20:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.075 03:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.335 03:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.335 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.335 "name": "raid_bdev1", 00:13:36.335 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:36.335 "strip_size_kb": 0, 00:13:36.335 "state": "online", 00:13:36.335 "raid_level": "raid1", 00:13:36.335 "superblock": true, 00:13:36.335 "num_base_bdevs": 4, 00:13:36.335 "num_base_bdevs_discovered": 2, 00:13:36.335 "num_base_bdevs_operational": 2, 00:13:36.335 "base_bdevs_list": [ 00:13:36.335 { 00:13:36.335 "name": null, 00:13:36.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.335 "is_configured": false, 00:13:36.335 "data_offset": 0, 00:13:36.335 "data_size": 63488 00:13:36.335 }, 00:13:36.335 { 00:13:36.335 "name": null, 00:13:36.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.335 
"is_configured": false, 00:13:36.335 "data_offset": 2048, 00:13:36.335 "data_size": 63488 00:13:36.335 }, 00:13:36.335 { 00:13:36.335 "name": "BaseBdev3", 00:13:36.335 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:36.335 "is_configured": true, 00:13:36.335 "data_offset": 2048, 00:13:36.335 "data_size": 63488 00:13:36.335 }, 00:13:36.335 { 00:13:36.335 "name": "BaseBdev4", 00:13:36.335 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:36.335 "is_configured": true, 00:13:36.335 "data_offset": 2048, 00:13:36.335 "data_size": 63488 00:13:36.335 } 00:13:36.335 ] 00:13:36.335 }' 00:13:36.335 03:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.335 03:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:36.594 "name": "raid_bdev1", 00:13:36.594 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:36.594 "strip_size_kb": 0, 00:13:36.594 "state": "online", 00:13:36.594 "raid_level": "raid1", 00:13:36.594 "superblock": true, 00:13:36.594 "num_base_bdevs": 4, 00:13:36.594 "num_base_bdevs_discovered": 2, 00:13:36.594 "num_base_bdevs_operational": 2, 00:13:36.594 "base_bdevs_list": [ 00:13:36.594 { 00:13:36.594 "name": null, 00:13:36.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.594 "is_configured": false, 00:13:36.594 "data_offset": 0, 00:13:36.594 "data_size": 63488 00:13:36.594 }, 00:13:36.594 { 00:13:36.594 "name": null, 00:13:36.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.594 "is_configured": false, 00:13:36.594 "data_offset": 2048, 00:13:36.594 "data_size": 63488 00:13:36.594 }, 00:13:36.594 { 00:13:36.594 "name": "BaseBdev3", 00:13:36.594 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:36.594 "is_configured": true, 00:13:36.594 "data_offset": 2048, 00:13:36.594 "data_size": 63488 00:13:36.594 }, 00:13:36.594 { 00:13:36.594 "name": "BaseBdev4", 00:13:36.594 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:36.594 "is_configured": true, 00:13:36.594 "data_offset": 2048, 00:13:36.594 "data_size": 63488 00:13:36.594 } 00:13:36.594 ] 00:13:36.594 }' 00:13:36.594 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.854 [2024-11-20 03:20:26.322795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.854 [2024-11-20 03:20:26.323002] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:36.854 [2024-11-20 03:20:26.323021] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:36.854 request: 00:13:36.854 { 00:13:36.854 "base_bdev": "BaseBdev1", 00:13:36.854 "raid_bdev": "raid_bdev1", 00:13:36.854 "method": "bdev_raid_add_base_bdev", 00:13:36.854 "req_id": 1 00:13:36.854 } 00:13:36.854 Got JSON-RPC error response 00:13:36.854 response: 00:13:36.854 { 00:13:36.854 "code": -22, 00:13:36.854 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:36.854 } 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:36.854 03:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:37.793 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.794 "name": "raid_bdev1", 00:13:37.794 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:37.794 "strip_size_kb": 0, 00:13:37.794 "state": "online", 00:13:37.794 "raid_level": "raid1", 00:13:37.794 "superblock": true, 00:13:37.794 "num_base_bdevs": 4, 00:13:37.794 "num_base_bdevs_discovered": 2, 00:13:37.794 "num_base_bdevs_operational": 2, 00:13:37.794 "base_bdevs_list": [ 00:13:37.794 { 00:13:37.794 "name": null, 00:13:37.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.794 "is_configured": false, 00:13:37.794 "data_offset": 0, 00:13:37.794 "data_size": 63488 00:13:37.794 }, 00:13:37.794 { 00:13:37.794 "name": null, 00:13:37.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.794 "is_configured": false, 00:13:37.794 "data_offset": 2048, 00:13:37.794 "data_size": 63488 00:13:37.794 }, 00:13:37.794 { 00:13:37.794 "name": "BaseBdev3", 00:13:37.794 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:37.794 "is_configured": true, 00:13:37.794 "data_offset": 2048, 00:13:37.794 "data_size": 63488 00:13:37.794 }, 00:13:37.794 { 00:13:37.794 "name": "BaseBdev4", 00:13:37.794 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:37.794 "is_configured": true, 00:13:37.794 "data_offset": 2048, 00:13:37.794 "data_size": 63488 00:13:37.794 } 00:13:37.794 ] 00:13:37.794 }' 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.794 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.364 03:20:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.364 "name": "raid_bdev1", 00:13:38.364 "uuid": "391192df-f44a-4a10-9ff6-d046c1021398", 00:13:38.364 "strip_size_kb": 0, 00:13:38.364 "state": "online", 00:13:38.364 "raid_level": "raid1", 00:13:38.364 "superblock": true, 00:13:38.364 "num_base_bdevs": 4, 00:13:38.364 "num_base_bdevs_discovered": 2, 00:13:38.364 "num_base_bdevs_operational": 2, 00:13:38.364 "base_bdevs_list": [ 00:13:38.364 { 00:13:38.364 "name": null, 00:13:38.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.364 "is_configured": false, 00:13:38.364 "data_offset": 0, 00:13:38.364 "data_size": 63488 00:13:38.364 }, 00:13:38.364 { 00:13:38.364 "name": null, 00:13:38.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.364 "is_configured": false, 00:13:38.364 "data_offset": 2048, 00:13:38.364 "data_size": 63488 00:13:38.364 }, 00:13:38.364 { 00:13:38.364 "name": "BaseBdev3", 00:13:38.364 "uuid": "128017b5-7e94-5456-b96e-fbc95f16c52d", 00:13:38.364 "is_configured": true, 00:13:38.364 "data_offset": 2048, 00:13:38.364 "data_size": 63488 00:13:38.364 }, 
00:13:38.364 { 00:13:38.364 "name": "BaseBdev4", 00:13:38.364 "uuid": "4024d254-2865-57fa-b627-5fd906da29a4", 00:13:38.364 "is_configured": true, 00:13:38.364 "data_offset": 2048, 00:13:38.364 "data_size": 63488 00:13:38.364 } 00:13:38.364 ] 00:13:38.364 }' 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77819 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77819 ']' 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77819 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77819 00:13:38.364 killing process with pid 77819 00:13:38.364 Received shutdown signal, test time was about 60.000000 seconds 00:13:38.364 00:13:38.364 Latency(us) 00:13:38.364 [2024-11-20T03:20:27.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.364 [2024-11-20T03:20:27.999Z] =================================================================================================================== 00:13:38.364 [2024-11-20T03:20:27.999Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77819' 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77819 00:13:38.364 [2024-11-20 03:20:27.995264] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.364 [2024-11-20 03:20:27.995398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.364 03:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77819 00:13:38.364 [2024-11-20 03:20:27.995470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.364 [2024-11-20 03:20:27.995480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:38.934 [2024-11-20 03:20:28.482396] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.314 03:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:40.314 00:13:40.314 real 0m25.448s 00:13:40.314 user 0m30.998s 00:13:40.314 sys 0m3.872s 00:13:40.314 03:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.314 03:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.314 ************************************ 00:13:40.314 END TEST raid_rebuild_test_sb 00:13:40.314 ************************************ 00:13:40.314 03:20:29 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:40.314 03:20:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:40.315 03:20:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.315 03:20:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:40.315 ************************************ 00:13:40.315 START TEST raid_rebuild_test_io 00:13:40.315 ************************************ 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78574 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78574 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78574 ']' 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.315 03:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.315 [2024-11-20 03:20:29.749746] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:13:40.315 [2024-11-20 03:20:29.749962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78574 ] 00:13:40.315 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:40.315 Zero copy mechanism will not be used. 
00:13:40.315 [2024-11-20 03:20:29.923172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.573 [2024-11-20 03:20:30.038376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.831 [2024-11-20 03:20:30.224888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.831 [2024-11-20 03:20:30.225016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.090 BaseBdev1_malloc 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.090 [2024-11-20 03:20:30.630342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:41.090 [2024-11-20 03:20:30.630431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.090 [2024-11-20 03:20:30.630457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:41.090 [2024-11-20 
03:20:30.630469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.090 [2024-11-20 03:20:30.632770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.090 [2024-11-20 03:20:30.632810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.090 BaseBdev1 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.090 BaseBdev2_malloc 00:13:41.090 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.091 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:41.091 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.091 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.091 [2024-11-20 03:20:30.684734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:41.091 [2024-11-20 03:20:30.684798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.091 [2024-11-20 03:20:30.684819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:41.091 [2024-11-20 03:20:30.684833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.091 [2024-11-20 03:20:30.686983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:41.091 [2024-11-20 03:20:30.687074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:41.091 BaseBdev2 00:13:41.091 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.091 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.091 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:41.091 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.091 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.351 BaseBdev3_malloc 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.351 [2024-11-20 03:20:30.752133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:41.351 [2024-11-20 03:20:30.752198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.351 [2024-11-20 03:20:30.752221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:41.351 [2024-11-20 03:20:30.752231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.351 [2024-11-20 03:20:30.754330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.351 [2024-11-20 03:20:30.754373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:41.351 BaseBdev3 00:13:41.351 03:20:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.351 BaseBdev4_malloc 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.351 [2024-11-20 03:20:30.804006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:41.351 [2024-11-20 03:20:30.804066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.351 [2024-11-20 03:20:30.804087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:41.351 [2024-11-20 03:20:30.804098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.351 [2024-11-20 03:20:30.806301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.351 [2024-11-20 03:20:30.806343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:41.351 BaseBdev4 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.351 spare_malloc 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.351 spare_delay 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.351 [2024-11-20 03:20:30.869629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.351 [2024-11-20 03:20:30.869696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.351 [2024-11-20 03:20:30.869721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:41.351 [2024-11-20 03:20:30.869734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.351 [2024-11-20 03:20:30.872146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.351 [2024-11-20 03:20:30.872188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.351 spare 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.351 [2024-11-20 03:20:30.881633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.351 [2024-11-20 03:20:30.883459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.351 [2024-11-20 03:20:30.883533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.351 [2024-11-20 03:20:30.883592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.351 [2024-11-20 03:20:30.883692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:41.351 [2024-11-20 03:20:30.883707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:41.351 [2024-11-20 03:20:30.883991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:41.351 [2024-11-20 03:20:30.884192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:41.351 [2024-11-20 03:20:30.884206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:41.351 [2024-11-20 03:20:30.884387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:41.351 03:20:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.351 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.351 "name": "raid_bdev1", 00:13:41.351 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:41.351 "strip_size_kb": 0, 00:13:41.351 "state": "online", 00:13:41.351 "raid_level": "raid1", 00:13:41.351 "superblock": false, 00:13:41.351 "num_base_bdevs": 4, 00:13:41.351 "num_base_bdevs_discovered": 4, 00:13:41.351 "num_base_bdevs_operational": 4, 00:13:41.351 "base_bdevs_list": [ 00:13:41.351 
{ 00:13:41.351 "name": "BaseBdev1", 00:13:41.351 "uuid": "65426b52-c691-5419-aee5-4a76540e0110", 00:13:41.351 "is_configured": true, 00:13:41.351 "data_offset": 0, 00:13:41.351 "data_size": 65536 00:13:41.351 }, 00:13:41.351 { 00:13:41.351 "name": "BaseBdev2", 00:13:41.351 "uuid": "f29dc6cd-a2ea-5d25-af7f-5cdc1a6e2bfc", 00:13:41.351 "is_configured": true, 00:13:41.351 "data_offset": 0, 00:13:41.351 "data_size": 65536 00:13:41.351 }, 00:13:41.351 { 00:13:41.351 "name": "BaseBdev3", 00:13:41.351 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:41.351 "is_configured": true, 00:13:41.351 "data_offset": 0, 00:13:41.352 "data_size": 65536 00:13:41.352 }, 00:13:41.352 { 00:13:41.352 "name": "BaseBdev4", 00:13:41.352 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:41.352 "is_configured": true, 00:13:41.352 "data_offset": 0, 00:13:41.352 "data_size": 65536 00:13:41.352 } 00:13:41.352 ] 00:13:41.352 }' 00:13:41.352 03:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.352 03:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.920 [2024-11-20 03:20:31.305241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.920 
03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.920 [2024-11-20 03:20:31.400734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.920 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.921 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.921 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.921 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.921 "name": "raid_bdev1", 00:13:41.921 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:41.921 "strip_size_kb": 0, 00:13:41.921 "state": "online", 00:13:41.921 "raid_level": "raid1", 00:13:41.921 "superblock": false, 00:13:41.921 "num_base_bdevs": 4, 00:13:41.921 "num_base_bdevs_discovered": 3, 00:13:41.921 "num_base_bdevs_operational": 3, 00:13:41.921 "base_bdevs_list": [ 00:13:41.921 { 00:13:41.921 "name": null, 00:13:41.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.921 "is_configured": false, 00:13:41.921 "data_offset": 0, 00:13:41.921 "data_size": 65536 00:13:41.921 }, 00:13:41.921 { 00:13:41.921 "name": "BaseBdev2", 00:13:41.921 "uuid": "f29dc6cd-a2ea-5d25-af7f-5cdc1a6e2bfc", 00:13:41.921 "is_configured": true, 00:13:41.921 "data_offset": 0, 00:13:41.921 "data_size": 65536 00:13:41.921 }, 00:13:41.921 { 00:13:41.921 "name": "BaseBdev3", 00:13:41.921 "uuid": 
"9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:41.921 "is_configured": true, 00:13:41.921 "data_offset": 0, 00:13:41.921 "data_size": 65536 00:13:41.921 }, 00:13:41.921 { 00:13:41.921 "name": "BaseBdev4", 00:13:41.921 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:41.921 "is_configured": true, 00:13:41.921 "data_offset": 0, 00:13:41.921 "data_size": 65536 00:13:41.921 } 00:13:41.921 ] 00:13:41.921 }' 00:13:41.921 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.921 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.921 [2024-11-20 03:20:31.504956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:41.921 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:41.921 Zero copy mechanism will not be used. 00:13:41.921 Running I/O for 60 seconds... 00:13:42.489 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.489 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.489 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.489 [2024-11-20 03:20:31.858362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.490 03:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.490 03:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:42.490 [2024-11-20 03:20:31.924515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:42.490 [2024-11-20 03:20:31.926663] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.490 [2024-11-20 03:20:32.057317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.490 
[2024-11-20 03:20:32.058849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.749 [2024-11-20 03:20:32.296016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.749 [2024-11-20 03:20:32.296904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:43.007 172.00 IOPS, 516.00 MiB/s [2024-11-20T03:20:32.642Z] [2024-11-20 03:20:32.638643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:43.267 [2024-11-20 03:20:32.865443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.267 [2024-11-20 03:20:32.865818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.527 "name": "raid_bdev1", 00:13:43.527 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:43.527 "strip_size_kb": 0, 00:13:43.527 "state": "online", 00:13:43.527 "raid_level": "raid1", 00:13:43.527 "superblock": false, 00:13:43.527 "num_base_bdevs": 4, 00:13:43.527 "num_base_bdevs_discovered": 4, 00:13:43.527 "num_base_bdevs_operational": 4, 00:13:43.527 "process": { 00:13:43.527 "type": "rebuild", 00:13:43.527 "target": "spare", 00:13:43.527 "progress": { 00:13:43.527 "blocks": 10240, 00:13:43.527 "percent": 15 00:13:43.527 } 00:13:43.527 }, 00:13:43.527 "base_bdevs_list": [ 00:13:43.527 { 00:13:43.527 "name": "spare", 00:13:43.527 "uuid": "f1dfe8a8-1ae3-53f3-8095-57b6b0eb9619", 00:13:43.527 "is_configured": true, 00:13:43.527 "data_offset": 0, 00:13:43.527 "data_size": 65536 00:13:43.527 }, 00:13:43.527 { 00:13:43.527 "name": "BaseBdev2", 00:13:43.527 "uuid": "f29dc6cd-a2ea-5d25-af7f-5cdc1a6e2bfc", 00:13:43.527 "is_configured": true, 00:13:43.527 "data_offset": 0, 00:13:43.527 "data_size": 65536 00:13:43.527 }, 00:13:43.527 { 00:13:43.527 "name": "BaseBdev3", 00:13:43.527 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:43.527 "is_configured": true, 00:13:43.527 "data_offset": 0, 00:13:43.527 "data_size": 65536 00:13:43.527 }, 00:13:43.527 { 00:13:43.527 "name": "BaseBdev4", 00:13:43.527 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:43.527 "is_configured": true, 00:13:43.527 "data_offset": 0, 00:13:43.527 "data_size": 65536 00:13:43.527 } 00:13:43.527 ] 00:13:43.527 }' 00:13:43.527 03:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.527 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.527 03:20:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.527 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.527 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.527 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.527 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.527 [2024-11-20 03:20:33.039116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.787 [2024-11-20 03:20:33.192956] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.787 [2024-11-20 03:20:33.198251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.787 [2024-11-20 03:20:33.198397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.787 [2024-11-20 03:20:33.198438] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.787 [2024-11-20 03:20:33.216435] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.787 03:20:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.787 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.787 "name": "raid_bdev1", 00:13:43.787 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:43.787 "strip_size_kb": 0, 00:13:43.787 "state": "online", 00:13:43.787 "raid_level": "raid1", 00:13:43.787 "superblock": false, 00:13:43.787 "num_base_bdevs": 4, 00:13:43.787 "num_base_bdevs_discovered": 3, 00:13:43.787 "num_base_bdevs_operational": 3, 00:13:43.787 "base_bdevs_list": [ 00:13:43.787 { 00:13:43.787 "name": null, 00:13:43.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.787 "is_configured": false, 00:13:43.787 "data_offset": 0, 00:13:43.787 "data_size": 65536 00:13:43.787 }, 00:13:43.787 { 00:13:43.787 "name": "BaseBdev2", 00:13:43.787 "uuid": "f29dc6cd-a2ea-5d25-af7f-5cdc1a6e2bfc", 00:13:43.787 "is_configured": true, 00:13:43.787 "data_offset": 0, 00:13:43.787 "data_size": 65536 00:13:43.787 }, 
00:13:43.787 { 00:13:43.787 "name": "BaseBdev3", 00:13:43.787 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:43.788 "is_configured": true, 00:13:43.788 "data_offset": 0, 00:13:43.788 "data_size": 65536 00:13:43.788 }, 00:13:43.788 { 00:13:43.788 "name": "BaseBdev4", 00:13:43.788 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:43.788 "is_configured": true, 00:13:43.788 "data_offset": 0, 00:13:43.788 "data_size": 65536 00:13:43.788 } 00:13:43.788 ] 00:13:43.788 }' 00:13:43.788 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.788 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.307 142.50 IOPS, 427.50 MiB/s [2024-11-20T03:20:33.942Z] 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.307 "name": "raid_bdev1", 00:13:44.307 "uuid": 
"707b63f8-6e03-47a0-9427-085b438a536c", 00:13:44.307 "strip_size_kb": 0, 00:13:44.307 "state": "online", 00:13:44.307 "raid_level": "raid1", 00:13:44.307 "superblock": false, 00:13:44.307 "num_base_bdevs": 4, 00:13:44.307 "num_base_bdevs_discovered": 3, 00:13:44.307 "num_base_bdevs_operational": 3, 00:13:44.307 "base_bdevs_list": [ 00:13:44.307 { 00:13:44.307 "name": null, 00:13:44.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.307 "is_configured": false, 00:13:44.307 "data_offset": 0, 00:13:44.307 "data_size": 65536 00:13:44.307 }, 00:13:44.307 { 00:13:44.307 "name": "BaseBdev2", 00:13:44.307 "uuid": "f29dc6cd-a2ea-5d25-af7f-5cdc1a6e2bfc", 00:13:44.307 "is_configured": true, 00:13:44.307 "data_offset": 0, 00:13:44.307 "data_size": 65536 00:13:44.307 }, 00:13:44.307 { 00:13:44.307 "name": "BaseBdev3", 00:13:44.307 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:44.307 "is_configured": true, 00:13:44.307 "data_offset": 0, 00:13:44.307 "data_size": 65536 00:13:44.307 }, 00:13:44.307 { 00:13:44.307 "name": "BaseBdev4", 00:13:44.307 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:44.307 "is_configured": true, 00:13:44.307 "data_offset": 0, 00:13:44.307 "data_size": 65536 00:13:44.307 } 00:13:44.307 ] 00:13:44.307 }' 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.307 [2024-11-20 03:20:33.831473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.307 03:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:44.307 [2024-11-20 03:20:33.874347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:44.307 [2024-11-20 03:20:33.876499] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.567 [2024-11-20 03:20:33.995301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.567 [2024-11-20 03:20:33.995910] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.826 [2024-11-20 03:20:34.220080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.827 [2024-11-20 03:20:34.220400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:45.086 135.67 IOPS, 407.00 MiB/s [2024-11-20T03:20:34.721Z] [2024-11-20 03:20:34.554771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:45.086 [2024-11-20 03:20:34.555330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:45.086 [2024-11-20 03:20:34.666250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:45.086 [2024-11-20 03:20:34.667057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:45.347 03:20:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.347 "name": "raid_bdev1", 00:13:45.347 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:45.347 "strip_size_kb": 0, 00:13:45.347 "state": "online", 00:13:45.347 "raid_level": "raid1", 00:13:45.347 "superblock": false, 00:13:45.347 "num_base_bdevs": 4, 00:13:45.347 "num_base_bdevs_discovered": 4, 00:13:45.347 "num_base_bdevs_operational": 4, 00:13:45.347 "process": { 00:13:45.347 "type": "rebuild", 00:13:45.347 "target": "spare", 00:13:45.347 "progress": { 00:13:45.347 "blocks": 12288, 00:13:45.347 "percent": 18 00:13:45.347 } 00:13:45.347 }, 00:13:45.347 "base_bdevs_list": [ 00:13:45.347 { 00:13:45.347 "name": "spare", 00:13:45.347 "uuid": "f1dfe8a8-1ae3-53f3-8095-57b6b0eb9619", 00:13:45.347 "is_configured": true, 00:13:45.347 "data_offset": 0, 00:13:45.347 "data_size": 65536 
00:13:45.347 }, 00:13:45.347 { 00:13:45.347 "name": "BaseBdev2", 00:13:45.347 "uuid": "f29dc6cd-a2ea-5d25-af7f-5cdc1a6e2bfc", 00:13:45.347 "is_configured": true, 00:13:45.347 "data_offset": 0, 00:13:45.347 "data_size": 65536 00:13:45.347 }, 00:13:45.347 { 00:13:45.347 "name": "BaseBdev3", 00:13:45.347 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:45.347 "is_configured": true, 00:13:45.347 "data_offset": 0, 00:13:45.347 "data_size": 65536 00:13:45.347 }, 00:13:45.347 { 00:13:45.347 "name": "BaseBdev4", 00:13:45.347 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:45.347 "is_configured": true, 00:13:45.347 "data_offset": 0, 00:13:45.347 "data_size": 65536 00:13:45.347 } 00:13:45.347 ] 00:13:45.347 }' 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.347 03:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.607 [2024-11-20 03:20:34.981852] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:45.607 [2024-11-20 03:20:34.994040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:45.607 [2024-11-20 03:20:34.995571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:45.607 [2024-11-20 03:20:35.103804] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:45.607 [2024-11-20 03:20:35.103918] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.607 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.607 "name": "raid_bdev1", 00:13:45.607 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:45.607 "strip_size_kb": 0, 00:13:45.607 "state": "online", 00:13:45.607 "raid_level": "raid1", 00:13:45.607 "superblock": false, 00:13:45.607 "num_base_bdevs": 4, 00:13:45.607 "num_base_bdevs_discovered": 3, 00:13:45.607 "num_base_bdevs_operational": 3, 00:13:45.607 "process": { 00:13:45.607 "type": "rebuild", 00:13:45.607 "target": "spare", 00:13:45.607 "progress": { 00:13:45.607 "blocks": 14336, 00:13:45.607 "percent": 21 00:13:45.607 } 00:13:45.607 }, 00:13:45.607 "base_bdevs_list": [ 00:13:45.607 { 00:13:45.607 "name": "spare", 00:13:45.607 "uuid": "f1dfe8a8-1ae3-53f3-8095-57b6b0eb9619", 00:13:45.607 "is_configured": true, 00:13:45.607 "data_offset": 0, 00:13:45.607 "data_size": 65536 00:13:45.607 }, 00:13:45.607 { 00:13:45.607 "name": null, 00:13:45.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.607 "is_configured": false, 00:13:45.607 "data_offset": 0, 00:13:45.607 "data_size": 65536 00:13:45.607 }, 00:13:45.607 { 00:13:45.607 "name": "BaseBdev3", 00:13:45.607 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:45.607 "is_configured": true, 00:13:45.607 "data_offset": 0, 00:13:45.607 "data_size": 65536 00:13:45.607 }, 00:13:45.607 { 00:13:45.607 "name": "BaseBdev4", 00:13:45.607 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:45.607 "is_configured": true, 00:13:45.607 "data_offset": 0, 00:13:45.607 "data_size": 65536 00:13:45.608 } 00:13:45.608 ] 00:13:45.608 }' 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.608 03:20:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.608 [2024-11-20 03:20:35.215978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=479 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.608 03:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.868 03:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.868 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.868 "name": "raid_bdev1", 00:13:45.868 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:45.868 "strip_size_kb": 0, 00:13:45.868 "state": "online", 00:13:45.868 "raid_level": "raid1", 00:13:45.868 
"superblock": false, 00:13:45.868 "num_base_bdevs": 4, 00:13:45.868 "num_base_bdevs_discovered": 3, 00:13:45.868 "num_base_bdevs_operational": 3, 00:13:45.868 "process": { 00:13:45.868 "type": "rebuild", 00:13:45.868 "target": "spare", 00:13:45.868 "progress": { 00:13:45.868 "blocks": 16384, 00:13:45.868 "percent": 25 00:13:45.868 } 00:13:45.868 }, 00:13:45.868 "base_bdevs_list": [ 00:13:45.868 { 00:13:45.868 "name": "spare", 00:13:45.868 "uuid": "f1dfe8a8-1ae3-53f3-8095-57b6b0eb9619", 00:13:45.868 "is_configured": true, 00:13:45.868 "data_offset": 0, 00:13:45.868 "data_size": 65536 00:13:45.868 }, 00:13:45.868 { 00:13:45.868 "name": null, 00:13:45.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.868 "is_configured": false, 00:13:45.868 "data_offset": 0, 00:13:45.868 "data_size": 65536 00:13:45.868 }, 00:13:45.868 { 00:13:45.868 "name": "BaseBdev3", 00:13:45.868 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:45.868 "is_configured": true, 00:13:45.868 "data_offset": 0, 00:13:45.868 "data_size": 65536 00:13:45.868 }, 00:13:45.868 { 00:13:45.868 "name": "BaseBdev4", 00:13:45.868 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:45.868 "is_configured": true, 00:13:45.868 "data_offset": 0, 00:13:45.868 "data_size": 65536 00:13:45.868 } 00:13:45.868 ] 00:13:45.868 }' 00:13:45.868 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.868 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.868 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.868 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.868 03:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.868 [2024-11-20 03:20:35.434966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 
offset_end: 24576 00:13:46.128 117.75 IOPS, 353.25 MiB/s [2024-11-20T03:20:35.763Z] [2024-11-20 03:20:35.551067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:46.696 [2024-11-20 03:20:36.135862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:46.956 [2024-11-20 03:20:36.356643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.956 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.956 "name": "raid_bdev1", 00:13:46.956 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:46.956 
"strip_size_kb": 0, 00:13:46.956 "state": "online", 00:13:46.956 "raid_level": "raid1", 00:13:46.956 "superblock": false, 00:13:46.956 "num_base_bdevs": 4, 00:13:46.956 "num_base_bdevs_discovered": 3, 00:13:46.956 "num_base_bdevs_operational": 3, 00:13:46.956 "process": { 00:13:46.956 "type": "rebuild", 00:13:46.956 "target": "spare", 00:13:46.956 "progress": { 00:13:46.956 "blocks": 34816, 00:13:46.956 "percent": 53 00:13:46.956 } 00:13:46.956 }, 00:13:46.956 "base_bdevs_list": [ 00:13:46.956 { 00:13:46.956 "name": "spare", 00:13:46.956 "uuid": "f1dfe8a8-1ae3-53f3-8095-57b6b0eb9619", 00:13:46.956 "is_configured": true, 00:13:46.956 "data_offset": 0, 00:13:46.956 "data_size": 65536 00:13:46.956 }, 00:13:46.956 { 00:13:46.956 "name": null, 00:13:46.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.956 "is_configured": false, 00:13:46.956 "data_offset": 0, 00:13:46.956 "data_size": 65536 00:13:46.956 }, 00:13:46.956 { 00:13:46.956 "name": "BaseBdev3", 00:13:46.956 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:46.956 "is_configured": true, 00:13:46.956 "data_offset": 0, 00:13:46.956 "data_size": 65536 00:13:46.956 }, 00:13:46.956 { 00:13:46.956 "name": "BaseBdev4", 00:13:46.956 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:46.956 "is_configured": true, 00:13:46.956 "data_offset": 0, 00:13:46.957 "data_size": 65536 00:13:46.957 } 00:13:46.957 ] 00:13:46.957 }' 00:13:46.957 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.957 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.957 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.957 104.60 IOPS, 313.80 MiB/s [2024-11-20T03:20:36.592Z] 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.957 03:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # 
sleep 1 00:13:47.216 [2024-11-20 03:20:36.700942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:47.476 [2024-11-20 03:20:36.910350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:47.476 [2024-11-20 03:20:36.911302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:47.736 [2024-11-20 03:20:37.147496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:47.996 [2024-11-20 03:20:37.371338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:47.996 94.17 IOPS, 282.50 MiB/s [2024-11-20T03:20:37.631Z] 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.996 "name": "raid_bdev1", 00:13:47.996 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:47.996 "strip_size_kb": 0, 00:13:47.996 "state": "online", 00:13:47.996 "raid_level": "raid1", 00:13:47.996 "superblock": false, 00:13:47.996 "num_base_bdevs": 4, 00:13:47.996 "num_base_bdevs_discovered": 3, 00:13:47.996 "num_base_bdevs_operational": 3, 00:13:47.996 "process": { 00:13:47.996 "type": "rebuild", 00:13:47.996 "target": "spare", 00:13:47.996 "progress": { 00:13:47.996 "blocks": 51200, 00:13:47.996 "percent": 78 00:13:47.996 } 00:13:47.996 }, 00:13:47.996 "base_bdevs_list": [ 00:13:47.996 { 00:13:47.996 "name": "spare", 00:13:47.996 "uuid": "f1dfe8a8-1ae3-53f3-8095-57b6b0eb9619", 00:13:47.996 "is_configured": true, 00:13:47.996 "data_offset": 0, 00:13:47.996 "data_size": 65536 00:13:47.996 }, 00:13:47.996 { 00:13:47.996 "name": null, 00:13:47.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.996 "is_configured": false, 00:13:47.996 "data_offset": 0, 00:13:47.996 "data_size": 65536 00:13:47.996 }, 00:13:47.996 { 00:13:47.996 "name": "BaseBdev3", 00:13:47.996 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:47.996 "is_configured": true, 00:13:47.996 "data_offset": 0, 00:13:47.996 "data_size": 65536 00:13:47.996 }, 00:13:47.996 { 00:13:47.996 "name": "BaseBdev4", 00:13:47.996 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:47.996 "is_configured": true, 00:13:47.996 "data_offset": 0, 00:13:47.996 "data_size": 65536 00:13:47.996 } 00:13:47.996 ] 00:13:47.996 }' 00:13:47.996 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.996 [2024-11-20 03:20:37.592572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 
00:13:48.256 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.256 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.256 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.256 03:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.521 [2024-11-20 03:20:37.922323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:49.103 [2024-11-20 03:20:38.466472] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:49.104 85.00 IOPS, 255.00 MiB/s [2024-11-20T03:20:38.739Z] [2024-11-20 03:20:38.571549] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:49.104 [2024-11-20 03:20:38.574065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.104 03:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.364 "name": "raid_bdev1", 00:13:49.364 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:49.364 "strip_size_kb": 0, 00:13:49.364 "state": "online", 00:13:49.364 "raid_level": "raid1", 00:13:49.364 "superblock": false, 00:13:49.364 "num_base_bdevs": 4, 00:13:49.364 "num_base_bdevs_discovered": 3, 00:13:49.364 "num_base_bdevs_operational": 3, 00:13:49.364 "base_bdevs_list": [ 00:13:49.364 { 00:13:49.364 "name": "spare", 00:13:49.364 "uuid": "f1dfe8a8-1ae3-53f3-8095-57b6b0eb9619", 00:13:49.364 "is_configured": true, 00:13:49.364 "data_offset": 0, 00:13:49.364 "data_size": 65536 00:13:49.364 }, 00:13:49.364 { 00:13:49.364 "name": null, 00:13:49.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.364 "is_configured": false, 00:13:49.364 "data_offset": 0, 00:13:49.364 "data_size": 65536 00:13:49.364 }, 00:13:49.364 { 00:13:49.364 "name": "BaseBdev3", 00:13:49.364 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:49.364 "is_configured": true, 00:13:49.364 "data_offset": 0, 00:13:49.364 "data_size": 65536 00:13:49.364 }, 00:13:49.364 { 00:13:49.364 "name": "BaseBdev4", 00:13:49.364 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:49.364 "is_configured": true, 00:13:49.364 "data_offset": 0, 00:13:49.364 "data_size": 65536 00:13:49.364 } 00:13:49.364 ] 00:13:49.364 }' 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.364 "name": "raid_bdev1", 00:13:49.364 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:49.364 "strip_size_kb": 0, 00:13:49.364 "state": "online", 00:13:49.364 "raid_level": "raid1", 00:13:49.364 "superblock": false, 00:13:49.364 "num_base_bdevs": 4, 00:13:49.364 "num_base_bdevs_discovered": 3, 00:13:49.364 "num_base_bdevs_operational": 3, 00:13:49.364 "base_bdevs_list": [ 00:13:49.364 { 00:13:49.364 "name": "spare", 00:13:49.364 "uuid": "f1dfe8a8-1ae3-53f3-8095-57b6b0eb9619", 00:13:49.364 "is_configured": true, 00:13:49.364 "data_offset": 0, 00:13:49.364 "data_size": 
65536 00:13:49.364 }, 00:13:49.364 { 00:13:49.364 "name": null, 00:13:49.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.364 "is_configured": false, 00:13:49.364 "data_offset": 0, 00:13:49.364 "data_size": 65536 00:13:49.364 }, 00:13:49.364 { 00:13:49.364 "name": "BaseBdev3", 00:13:49.364 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:49.364 "is_configured": true, 00:13:49.364 "data_offset": 0, 00:13:49.364 "data_size": 65536 00:13:49.364 }, 00:13:49.364 { 00:13:49.364 "name": "BaseBdev4", 00:13:49.364 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:49.364 "is_configured": true, 00:13:49.364 "data_offset": 0, 00:13:49.364 "data_size": 65536 00:13:49.364 } 00:13:49.364 ] 00:13:49.364 }' 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.364 03:20:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.364 03:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.624 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.624 "name": "raid_bdev1", 00:13:49.624 "uuid": "707b63f8-6e03-47a0-9427-085b438a536c", 00:13:49.624 "strip_size_kb": 0, 00:13:49.624 "state": "online", 00:13:49.624 "raid_level": "raid1", 00:13:49.624 "superblock": false, 00:13:49.624 "num_base_bdevs": 4, 00:13:49.624 "num_base_bdevs_discovered": 3, 00:13:49.624 "num_base_bdevs_operational": 3, 00:13:49.624 "base_bdevs_list": [ 00:13:49.624 { 00:13:49.624 "name": "spare", 00:13:49.624 "uuid": "f1dfe8a8-1ae3-53f3-8095-57b6b0eb9619", 00:13:49.624 "is_configured": true, 00:13:49.624 "data_offset": 0, 00:13:49.624 "data_size": 65536 00:13:49.624 }, 00:13:49.624 { 00:13:49.624 "name": null, 00:13:49.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.624 "is_configured": false, 00:13:49.624 "data_offset": 0, 00:13:49.624 "data_size": 65536 00:13:49.624 }, 00:13:49.624 { 00:13:49.624 "name": "BaseBdev3", 00:13:49.624 "uuid": "9b92b6f1-4c55-5b9f-b6fe-61486d742228", 00:13:49.624 "is_configured": true, 00:13:49.624 "data_offset": 0, 00:13:49.624 "data_size": 65536 00:13:49.624 }, 
00:13:49.624 { 00:13:49.624 "name": "BaseBdev4", 00:13:49.624 "uuid": "e338ac03-af1c-57bf-bae5-6bb44a976ef4", 00:13:49.624 "is_configured": true, 00:13:49.624 "data_offset": 0, 00:13:49.624 "data_size": 65536 00:13:49.624 } 00:13:49.624 ] 00:13:49.624 }' 00:13:49.624 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.624 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.884 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.884 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.884 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.884 [2024-11-20 03:20:39.347142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.884 [2024-11-20 03:20:39.347175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.884 00:13:49.884 Latency(us) 00:13:49.884 [2024-11-20T03:20:39.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.884 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:49.884 raid_bdev1 : 7.95 79.65 238.96 0.00 0.00 17344.58 341.63 112183.90 00:13:49.884 [2024-11-20T03:20:39.519Z] =================================================================================================================== 00:13:49.884 [2024-11-20T03:20:39.519Z] Total : 79.65 238.96 0.00 0.00 17344.58 341.63 112183.90 00:13:49.884 [2024-11-20 03:20:39.460094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.884 [2024-11-20 03:20:39.460137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.884 [2024-11-20 03:20:39.460234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:13:49.884 [2024-11-20 03:20:39.460247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:49.884 { 00:13:49.884 "results": [ 00:13:49.884 { 00:13:49.884 "job": "raid_bdev1", 00:13:49.884 "core_mask": "0x1", 00:13:49.884 "workload": "randrw", 00:13:49.884 "percentage": 50, 00:13:49.884 "status": "finished", 00:13:49.884 "queue_depth": 2, 00:13:49.884 "io_size": 3145728, 00:13:49.884 "runtime": 7.94688, 00:13:49.884 "iops": 79.65390190867359, 00:13:49.884 "mibps": 238.96170572602077, 00:13:49.884 "io_failed": 0, 00:13:49.884 "io_timeout": 0, 00:13:49.884 "avg_latency_us": 17344.57793690543, 00:13:49.884 "min_latency_us": 341.63144104803496, 00:13:49.884 "max_latency_us": 112183.89519650655 00:13:49.884 } 00:13:49.884 ], 00:13:49.884 "core_count": 1 00:13:49.884 } 00:13:49.884 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.884 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.884 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.884 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.884 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:49.884 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:50.144 /dev/nbd0 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.144 03:20:39 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.144 1+0 records in 00:13:50.144 1+0 records out 00:13:50.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385713 s, 10.6 MB/s 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.144 03:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:50.145 03:20:39 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.145 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:50.405 /dev/nbd1 00:13:50.405 03:20:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:13:50.405 1+0 records in 00:13:50.405 1+0 records out 00:13:50.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354774 s, 11.5 MB/s 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.405 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:50.665 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:50.665 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.665 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:50.665 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.665 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.665 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.665 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:50.925 03:20:40 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.925 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk BaseBdev4 /dev/nbd1 00:13:51.186 /dev/nbd1 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.186 1+0 records in 00:13:51.186 1+0 records out 00:13:51.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363387 s, 11.3 MB/s 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.186 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.446 03:20:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.446 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:51.447 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.447 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:51.447 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.447 03:20:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78574 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78574 ']' 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 
-- # kill -0 78574 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78574 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.707 killing process with pid 78574 00:13:51.707 Received shutdown signal, test time was about 9.765298 seconds 00:13:51.707 00:13:51.707 Latency(us) 00:13:51.707 [2024-11-20T03:20:41.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.707 [2024-11-20T03:20:41.342Z] =================================================================================================================== 00:13:51.707 [2024-11-20T03:20:41.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78574' 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78574 00:13:51.707 03:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78574 00:13:51.707 [2024-11-20 03:20:41.253834] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.277 [2024-11-20 03:20:41.663463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.218 ************************************ 00:13:53.218 END TEST raid_rebuild_test_io 00:13:53.218 ************************************ 00:13:53.218 03:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:53.218 00:13:53.218 real 0m13.155s 00:13:53.218 user 0m16.569s 00:13:53.218 sys 0m1.753s 00:13:53.218 03:20:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.218 03:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.478 03:20:42 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:53.478 03:20:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:53.478 03:20:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.478 03:20:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.478 ************************************ 00:13:53.478 START TEST raid_rebuild_test_sb_io 00:13:53.478 ************************************ 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78977 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78977 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78977 ']' 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.478 03:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.478 [2024-11-20 03:20:42.976264] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:13:53.478 [2024-11-20 03:20:42.976445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:53.478 Zero copy mechanism will not be used. 
00:13:53.478 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78977 ] 00:13:53.738 [2024-11-20 03:20:43.150808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.738 [2024-11-20 03:20:43.261398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.998 [2024-11-20 03:20:43.459404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.998 [2024-11-20 03:20:43.459499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.258 BaseBdev1_malloc 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.258 [2024-11-20 03:20:43.855380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:54.258 [2024-11-20 03:20:43.855521] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.258 [2024-11-20 03:20:43.855569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:54.258 [2024-11-20 03:20:43.855619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.258 [2024-11-20 03:20:43.857749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.258 [2024-11-20 03:20:43.857838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:54.258 BaseBdev1 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.258 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.519 BaseBdev2_malloc 00:13:54.519 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.519 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:54.519 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.519 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.519 [2024-11-20 03:20:43.909004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:54.519 [2024-11-20 03:20:43.909064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.519 [2024-11-20 03:20:43.909082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:13:54.519 [2024-11-20 03:20:43.909094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.519 [2024-11-20 03:20:43.911119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.519 [2024-11-20 03:20:43.911157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:54.519 BaseBdev2 00:13:54.519 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.519 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.519 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.520 BaseBdev3_malloc 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.520 [2024-11-20 03:20:43.976411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:54.520 [2024-11-20 03:20:43.976466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.520 [2024-11-20 03:20:43.976486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:54.520 [2024-11-20 03:20:43.976497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.520 [2024-11-20 
03:20:43.978553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.520 [2024-11-20 03:20:43.978644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:54.520 BaseBdev3 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.520 03:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.520 BaseBdev4_malloc 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.520 [2024-11-20 03:20:44.033499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:54.520 [2024-11-20 03:20:44.033594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.520 [2024-11-20 03:20:44.033629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:54.520 [2024-11-20 03:20:44.033657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.520 [2024-11-20 03:20:44.035777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.520 [2024-11-20 03:20:44.035827] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:54.520 BaseBdev4 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.520 spare_malloc 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.520 spare_delay 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.520 [2024-11-20 03:20:44.092224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.520 [2024-11-20 03:20:44.092281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.520 [2024-11-20 03:20:44.092300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:54.520 [2024-11-20 03:20:44.092309] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.520 [2024-11-20 03:20:44.094337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.520 [2024-11-20 03:20:44.094377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.520 spare 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.520 [2024-11-20 03:20:44.100253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.520 [2024-11-20 03:20:44.102077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.520 [2024-11-20 03:20:44.102142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.520 [2024-11-20 03:20:44.102193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.520 [2024-11-20 03:20:44.102368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:54.520 [2024-11-20 03:20:44.102388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:54.520 [2024-11-20 03:20:44.102663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:54.520 [2024-11-20 03:20:44.102854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:54.520 [2024-11-20 03:20:44.102864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:54.520 
[2024-11-20 03:20:44.103006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.520 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.781 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:54.781 "name": "raid_bdev1", 00:13:54.781 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:13:54.781 "strip_size_kb": 0, 00:13:54.781 "state": "online", 00:13:54.781 "raid_level": "raid1", 00:13:54.781 "superblock": true, 00:13:54.781 "num_base_bdevs": 4, 00:13:54.781 "num_base_bdevs_discovered": 4, 00:13:54.781 "num_base_bdevs_operational": 4, 00:13:54.781 "base_bdevs_list": [ 00:13:54.781 { 00:13:54.781 "name": "BaseBdev1", 00:13:54.781 "uuid": "911d6c0a-a281-5fe7-add7-bb9b3876453a", 00:13:54.781 "is_configured": true, 00:13:54.781 "data_offset": 2048, 00:13:54.781 "data_size": 63488 00:13:54.781 }, 00:13:54.781 { 00:13:54.781 "name": "BaseBdev2", 00:13:54.781 "uuid": "4ca3b938-eaab-5a1e-b2a5-3b83e5c8f7b4", 00:13:54.781 "is_configured": true, 00:13:54.781 "data_offset": 2048, 00:13:54.781 "data_size": 63488 00:13:54.781 }, 00:13:54.781 { 00:13:54.781 "name": "BaseBdev3", 00:13:54.781 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:13:54.781 "is_configured": true, 00:13:54.781 "data_offset": 2048, 00:13:54.781 "data_size": 63488 00:13:54.781 }, 00:13:54.781 { 00:13:54.781 "name": "BaseBdev4", 00:13:54.781 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:13:54.781 "is_configured": true, 00:13:54.781 "data_offset": 2048, 00:13:54.781 "data_size": 63488 00:13:54.781 } 00:13:54.781 ] 00:13:54.781 }' 00:13:54.781 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.781 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.041 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:55.041 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:55.041 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.041 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:55.041 [2024-11-20 03:20:44.531928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.041 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.041 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:55.041 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.041 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.042 [2024-11-20 03:20:44.603413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.042 03:20:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.042 "name": "raid_bdev1", 00:13:55.042 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:13:55.042 "strip_size_kb": 0, 00:13:55.042 "state": "online", 00:13:55.042 "raid_level": "raid1", 00:13:55.042 "superblock": true, 00:13:55.042 "num_base_bdevs": 4, 00:13:55.042 "num_base_bdevs_discovered": 3, 00:13:55.042 "num_base_bdevs_operational": 3, 
00:13:55.042 "base_bdevs_list": [ 00:13:55.042 { 00:13:55.042 "name": null, 00:13:55.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.042 "is_configured": false, 00:13:55.042 "data_offset": 0, 00:13:55.042 "data_size": 63488 00:13:55.042 }, 00:13:55.042 { 00:13:55.042 "name": "BaseBdev2", 00:13:55.042 "uuid": "4ca3b938-eaab-5a1e-b2a5-3b83e5c8f7b4", 00:13:55.042 "is_configured": true, 00:13:55.042 "data_offset": 2048, 00:13:55.042 "data_size": 63488 00:13:55.042 }, 00:13:55.042 { 00:13:55.042 "name": "BaseBdev3", 00:13:55.042 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:13:55.042 "is_configured": true, 00:13:55.042 "data_offset": 2048, 00:13:55.042 "data_size": 63488 00:13:55.042 }, 00:13:55.042 { 00:13:55.042 "name": "BaseBdev4", 00:13:55.042 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:13:55.042 "is_configured": true, 00:13:55.042 "data_offset": 2048, 00:13:55.042 "data_size": 63488 00:13:55.042 } 00:13:55.042 ] 00:13:55.042 }' 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.042 03:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.302 [2024-11-20 03:20:44.687986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:55.302 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:55.302 Zero copy mechanism will not be used. 00:13:55.302 Running I/O for 60 seconds... 
00:13:55.562 03:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.562 03:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.562 03:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.562 [2024-11-20 03:20:45.037068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.562 03:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.562 03:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:55.562 [2024-11-20 03:20:45.091823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:55.562 [2024-11-20 03:20:45.093854] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.823 [2024-11-20 03:20:45.215468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:55.823 [2024-11-20 03:20:45.217004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:55.823 [2024-11-20 03:20:45.455780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:55.823 [2024-11-20 03:20:45.456627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:56.342 132.00 IOPS, 396.00 MiB/s [2024-11-20T03:20:45.977Z] [2024-11-20 03:20:45.952382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:56.342 [2024-11-20 03:20:45.952816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.602 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.602 "name": "raid_bdev1", 00:13:56.602 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:13:56.602 "strip_size_kb": 0, 00:13:56.602 "state": "online", 00:13:56.602 "raid_level": "raid1", 00:13:56.603 "superblock": true, 00:13:56.603 "num_base_bdevs": 4, 00:13:56.603 "num_base_bdevs_discovered": 4, 00:13:56.603 "num_base_bdevs_operational": 4, 00:13:56.603 "process": { 00:13:56.603 "type": "rebuild", 00:13:56.603 "target": "spare", 00:13:56.603 "progress": { 00:13:56.603 "blocks": 12288, 00:13:56.603 "percent": 19 00:13:56.603 } 00:13:56.603 }, 00:13:56.603 "base_bdevs_list": [ 00:13:56.603 { 00:13:56.603 "name": "spare", 00:13:56.603 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:13:56.603 "is_configured": true, 00:13:56.603 "data_offset": 2048, 00:13:56.603 "data_size": 63488 
00:13:56.603 }, 00:13:56.603 { 00:13:56.603 "name": "BaseBdev2", 00:13:56.603 "uuid": "4ca3b938-eaab-5a1e-b2a5-3b83e5c8f7b4", 00:13:56.603 "is_configured": true, 00:13:56.603 "data_offset": 2048, 00:13:56.603 "data_size": 63488 00:13:56.603 }, 00:13:56.603 { 00:13:56.603 "name": "BaseBdev3", 00:13:56.603 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:13:56.603 "is_configured": true, 00:13:56.603 "data_offset": 2048, 00:13:56.603 "data_size": 63488 00:13:56.603 }, 00:13:56.603 { 00:13:56.603 "name": "BaseBdev4", 00:13:56.603 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:13:56.603 "is_configured": true, 00:13:56.603 "data_offset": 2048, 00:13:56.603 "data_size": 63488 00:13:56.603 } 00:13:56.603 ] 00:13:56.603 }' 00:13:56.603 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.603 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.603 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.603 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.603 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:56.603 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.603 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.603 [2024-11-20 03:20:46.223674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.863 [2024-11-20 03:20:46.291202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:56.863 [2024-11-20 03:20:46.328893] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:56.863 [2024-11-20 
03:20:46.332628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.863 [2024-11-20 03:20:46.332725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.863 [2024-11-20 03:20:46.332753] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:56.863 [2024-11-20 03:20:46.362483] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.863 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.863 03:20:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.864 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.864 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.864 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.864 "name": "raid_bdev1", 00:13:56.864 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:13:56.864 "strip_size_kb": 0, 00:13:56.864 "state": "online", 00:13:56.864 "raid_level": "raid1", 00:13:56.864 "superblock": true, 00:13:56.864 "num_base_bdevs": 4, 00:13:56.864 "num_base_bdevs_discovered": 3, 00:13:56.864 "num_base_bdevs_operational": 3, 00:13:56.864 "base_bdevs_list": [ 00:13:56.864 { 00:13:56.864 "name": null, 00:13:56.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.864 "is_configured": false, 00:13:56.864 "data_offset": 0, 00:13:56.864 "data_size": 63488 00:13:56.864 }, 00:13:56.864 { 00:13:56.864 "name": "BaseBdev2", 00:13:56.864 "uuid": "4ca3b938-eaab-5a1e-b2a5-3b83e5c8f7b4", 00:13:56.864 "is_configured": true, 00:13:56.864 "data_offset": 2048, 00:13:56.864 "data_size": 63488 00:13:56.864 }, 00:13:56.864 { 00:13:56.864 "name": "BaseBdev3", 00:13:56.864 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:13:56.864 "is_configured": true, 00:13:56.864 "data_offset": 2048, 00:13:56.864 "data_size": 63488 00:13:56.864 }, 00:13:56.864 { 00:13:56.864 "name": "BaseBdev4", 00:13:56.864 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:13:56.864 "is_configured": true, 00:13:56.864 "data_offset": 2048, 00:13:56.864 "data_size": 63488 00:13:56.864 } 00:13:56.864 ] 00:13:56.864 }' 00:13:56.864 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.864 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.385 140.00 IOPS, 420.00 MiB/s [2024-11-20T03:20:47.020Z] 03:20:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.385 "name": "raid_bdev1", 00:13:57.385 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:13:57.385 "strip_size_kb": 0, 00:13:57.385 "state": "online", 00:13:57.385 "raid_level": "raid1", 00:13:57.385 "superblock": true, 00:13:57.385 "num_base_bdevs": 4, 00:13:57.385 "num_base_bdevs_discovered": 3, 00:13:57.385 "num_base_bdevs_operational": 3, 00:13:57.385 "base_bdevs_list": [ 00:13:57.385 { 00:13:57.385 "name": null, 00:13:57.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.385 "is_configured": false, 00:13:57.385 "data_offset": 0, 00:13:57.385 "data_size": 63488 00:13:57.385 }, 00:13:57.385 { 00:13:57.385 "name": "BaseBdev2", 00:13:57.385 "uuid": "4ca3b938-eaab-5a1e-b2a5-3b83e5c8f7b4", 00:13:57.385 "is_configured": true, 00:13:57.385 "data_offset": 
2048, 00:13:57.385 "data_size": 63488 00:13:57.385 }, 00:13:57.385 { 00:13:57.385 "name": "BaseBdev3", 00:13:57.385 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:13:57.385 "is_configured": true, 00:13:57.385 "data_offset": 2048, 00:13:57.385 "data_size": 63488 00:13:57.385 }, 00:13:57.385 { 00:13:57.385 "name": "BaseBdev4", 00:13:57.385 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:13:57.385 "is_configured": true, 00:13:57.385 "data_offset": 2048, 00:13:57.385 "data_size": 63488 00:13:57.385 } 00:13:57.385 ] 00:13:57.385 }' 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.385 [2024-11-20 03:20:46.930939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.385 03:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:57.385 [2024-11-20 03:20:46.978691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:57.385 [2024-11-20 03:20:46.980651] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.645 [2024-11-20 03:20:47.101326] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:57.645 [2024-11-20 03:20:47.102738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:57.906 [2024-11-20 03:20:47.362919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:58.165 141.00 IOPS, 423.00 MiB/s [2024-11-20T03:20:47.800Z] [2024-11-20 03:20:47.707774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:58.424 [2024-11-20 03:20:47.837502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.424 [2024-11-20 03:20:47.838260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.424 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.424 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.424 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.424 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.424 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.424 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.424 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.424 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.424 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:58.425 03:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.425 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.425 "name": "raid_bdev1", 00:13:58.425 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:13:58.425 "strip_size_kb": 0, 00:13:58.425 "state": "online", 00:13:58.425 "raid_level": "raid1", 00:13:58.425 "superblock": true, 00:13:58.425 "num_base_bdevs": 4, 00:13:58.425 "num_base_bdevs_discovered": 4, 00:13:58.425 "num_base_bdevs_operational": 4, 00:13:58.425 "process": { 00:13:58.425 "type": "rebuild", 00:13:58.425 "target": "spare", 00:13:58.425 "progress": { 00:13:58.425 "blocks": 10240, 00:13:58.425 "percent": 16 00:13:58.425 } 00:13:58.425 }, 00:13:58.425 "base_bdevs_list": [ 00:13:58.425 { 00:13:58.425 "name": "spare", 00:13:58.425 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:13:58.425 "is_configured": true, 00:13:58.425 "data_offset": 2048, 00:13:58.425 "data_size": 63488 00:13:58.425 }, 00:13:58.425 { 00:13:58.425 "name": "BaseBdev2", 00:13:58.425 "uuid": "4ca3b938-eaab-5a1e-b2a5-3b83e5c8f7b4", 00:13:58.425 "is_configured": true, 00:13:58.425 "data_offset": 2048, 00:13:58.425 "data_size": 63488 00:13:58.425 }, 00:13:58.425 { 00:13:58.425 "name": "BaseBdev3", 00:13:58.425 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:13:58.425 "is_configured": true, 00:13:58.425 "data_offset": 2048, 00:13:58.425 "data_size": 63488 00:13:58.425 }, 00:13:58.425 { 00:13:58.425 "name": "BaseBdev4", 00:13:58.425 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:13:58.425 "is_configured": true, 00:13:58.425 "data_offset": 2048, 00:13:58.425 "data_size": 63488 00:13:58.425 } 00:13:58.425 ] 00:13:58.425 }' 00:13:58.425 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.425 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.685 
03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.685 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.685 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:58.685 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:58.685 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:58.685 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:58.685 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:58.685 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:58.685 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:58.685 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.685 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.685 [2024-11-20 03:20:48.111651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:58.685 [2024-11-20 03:20:48.184531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:58.946 [2024-11-20 03:20:48.392328] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:58.946 [2024-11-20 03:20:48.392372] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:58.946 03:20:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.946 "name": "raid_bdev1", 00:13:58.946 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:13:58.946 "strip_size_kb": 0, 00:13:58.946 "state": "online", 00:13:58.946 "raid_level": "raid1", 00:13:58.946 "superblock": true, 00:13:58.946 "num_base_bdevs": 4, 00:13:58.946 "num_base_bdevs_discovered": 3, 00:13:58.946 "num_base_bdevs_operational": 3, 00:13:58.946 "process": { 00:13:58.946 "type": "rebuild", 00:13:58.946 "target": "spare", 00:13:58.946 "progress": { 00:13:58.946 "blocks": 14336, 00:13:58.946 "percent": 22 00:13:58.946 } 00:13:58.946 }, 00:13:58.946 "base_bdevs_list": [ 00:13:58.946 { 00:13:58.946 "name": "spare", 00:13:58.946 
"uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:13:58.946 "is_configured": true, 00:13:58.946 "data_offset": 2048, 00:13:58.946 "data_size": 63488 00:13:58.946 }, 00:13:58.946 { 00:13:58.946 "name": null, 00:13:58.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.946 "is_configured": false, 00:13:58.946 "data_offset": 0, 00:13:58.946 "data_size": 63488 00:13:58.946 }, 00:13:58.946 { 00:13:58.946 "name": "BaseBdev3", 00:13:58.946 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:13:58.946 "is_configured": true, 00:13:58.946 "data_offset": 2048, 00:13:58.946 "data_size": 63488 00:13:58.946 }, 00:13:58.946 { 00:13:58.946 "name": "BaseBdev4", 00:13:58.946 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:13:58.946 "is_configured": true, 00:13:58.946 "data_offset": 2048, 00:13:58.946 "data_size": 63488 00:13:58.946 } 00:13:58.946 ] 00:13:58.946 }' 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.946 [2024-11-20 03:20:48.511825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:58.946 [2024-11-20 03:20:48.512208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=492 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.946 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.206 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.206 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.206 "name": "raid_bdev1", 00:13:59.206 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:13:59.206 "strip_size_kb": 0, 00:13:59.206 "state": "online", 00:13:59.206 "raid_level": "raid1", 00:13:59.206 "superblock": true, 00:13:59.206 "num_base_bdevs": 4, 00:13:59.206 "num_base_bdevs_discovered": 3, 00:13:59.206 "num_base_bdevs_operational": 3, 00:13:59.206 "process": { 00:13:59.206 "type": "rebuild", 00:13:59.206 "target": "spare", 00:13:59.206 "progress": { 00:13:59.206 "blocks": 16384, 00:13:59.206 "percent": 25 00:13:59.206 } 00:13:59.206 }, 00:13:59.206 "base_bdevs_list": [ 00:13:59.206 { 00:13:59.206 "name": "spare", 00:13:59.206 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:13:59.206 "is_configured": true, 00:13:59.206 "data_offset": 2048, 00:13:59.206 "data_size": 63488 00:13:59.206 }, 00:13:59.206 { 00:13:59.206 "name": null, 
00:13:59.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.206 "is_configured": false, 00:13:59.206 "data_offset": 0, 00:13:59.206 "data_size": 63488 00:13:59.206 }, 00:13:59.206 { 00:13:59.207 "name": "BaseBdev3", 00:13:59.207 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:13:59.207 "is_configured": true, 00:13:59.207 "data_offset": 2048, 00:13:59.207 "data_size": 63488 00:13:59.207 }, 00:13:59.207 { 00:13:59.207 "name": "BaseBdev4", 00:13:59.207 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:13:59.207 "is_configured": true, 00:13:59.207 "data_offset": 2048, 00:13:59.207 "data_size": 63488 00:13:59.207 } 00:13:59.207 ] 00:13:59.207 }' 00:13:59.207 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.207 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.207 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.207 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.207 03:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.207 131.25 IOPS, 393.75 MiB/s [2024-11-20T03:20:48.842Z] [2024-11-20 03:20:48.740403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:59.466 [2024-11-20 03:20:48.949622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:59.726 [2024-11-20 03:20:49.174759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:59.985 [2024-11-20 03:20:49.405306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.244 116.60 IOPS, 349.80 MiB/s [2024-11-20T03:20:49.879Z] 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.244 "name": "raid_bdev1", 00:14:00.244 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:00.244 "strip_size_kb": 0, 00:14:00.244 "state": "online", 00:14:00.244 "raid_level": "raid1", 00:14:00.244 "superblock": true, 00:14:00.244 "num_base_bdevs": 4, 00:14:00.244 "num_base_bdevs_discovered": 3, 00:14:00.244 "num_base_bdevs_operational": 3, 00:14:00.244 "process": { 00:14:00.244 "type": "rebuild", 00:14:00.244 "target": "spare", 00:14:00.244 "progress": { 00:14:00.244 "blocks": 32768, 00:14:00.244 "percent": 51 00:14:00.244 } 00:14:00.244 }, 00:14:00.244 "base_bdevs_list": [ 00:14:00.244 { 00:14:00.244 "name": "spare", 
00:14:00.244 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:14:00.244 "is_configured": true, 00:14:00.244 "data_offset": 2048, 00:14:00.244 "data_size": 63488 00:14:00.244 }, 00:14:00.244 { 00:14:00.244 "name": null, 00:14:00.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.244 "is_configured": false, 00:14:00.244 "data_offset": 0, 00:14:00.244 "data_size": 63488 00:14:00.244 }, 00:14:00.244 { 00:14:00.244 "name": "BaseBdev3", 00:14:00.244 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:00.244 "is_configured": true, 00:14:00.244 "data_offset": 2048, 00:14:00.244 "data_size": 63488 00:14:00.244 }, 00:14:00.244 { 00:14:00.244 "name": "BaseBdev4", 00:14:00.244 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:00.244 "is_configured": true, 00:14:00.244 "data_offset": 2048, 00:14:00.244 "data_size": 63488 00:14:00.244 } 00:14:00.244 ] 00:14:00.244 }' 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.244 03:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.505 [2024-11-20 03:20:50.081490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:01.074 [2024-11-20 03:20:50.426355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:01.074 [2024-11-20 03:20:50.427490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:01.334 103.50 IOPS, 310.50 MiB/s 
[2024-11-20T03:20:50.969Z] 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.334 "name": "raid_bdev1", 00:14:01.334 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:01.334 "strip_size_kb": 0, 00:14:01.334 "state": "online", 00:14:01.334 "raid_level": "raid1", 00:14:01.334 "superblock": true, 00:14:01.334 "num_base_bdevs": 4, 00:14:01.334 "num_base_bdevs_discovered": 3, 00:14:01.334 "num_base_bdevs_operational": 3, 00:14:01.334 "process": { 00:14:01.334 "type": "rebuild", 00:14:01.334 "target": "spare", 00:14:01.334 "progress": { 00:14:01.334 "blocks": 49152, 00:14:01.334 "percent": 77 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 "base_bdevs_list": [ 00:14:01.334 { 00:14:01.334 
"name": "spare", 00:14:01.334 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:14:01.334 "is_configured": true, 00:14:01.334 "data_offset": 2048, 00:14:01.334 "data_size": 63488 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "name": null, 00:14:01.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.334 "is_configured": false, 00:14:01.334 "data_offset": 0, 00:14:01.334 "data_size": 63488 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "name": "BaseBdev3", 00:14:01.334 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:01.334 "is_configured": true, 00:14:01.334 "data_offset": 2048, 00:14:01.334 "data_size": 63488 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "name": "BaseBdev4", 00:14:01.334 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:01.334 "is_configured": true, 00:14:01.334 "data_offset": 2048, 00:14:01.334 "data_size": 63488 00:14:01.334 } 00:14:01.334 ] 00:14:01.334 }' 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.334 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.594 [2024-11-20 03:20:50.977018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:01.594 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.594 03:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.165 [2024-11-20 03:20:51.516516] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:02.165 [2024-11-20 03:20:51.616314] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:02.165 [2024-11-20 03:20:51.619219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:02.437 94.14 IOPS, 282.43 MiB/s [2024-11-20T03:20:52.072Z] 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.437 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.437 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.437 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.437 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.437 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.437 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.437 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.437 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.437 03:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.437 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.437 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.437 "name": "raid_bdev1", 00:14:02.437 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:02.437 "strip_size_kb": 0, 00:14:02.437 "state": "online", 00:14:02.437 "raid_level": "raid1", 00:14:02.437 "superblock": true, 00:14:02.437 "num_base_bdevs": 4, 00:14:02.437 "num_base_bdevs_discovered": 3, 00:14:02.437 "num_base_bdevs_operational": 3, 00:14:02.437 "base_bdevs_list": [ 00:14:02.437 { 00:14:02.437 "name": "spare", 00:14:02.437 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:14:02.437 "is_configured": true, 00:14:02.437 "data_offset": 2048, 
00:14:02.437 "data_size": 63488 00:14:02.437 }, 00:14:02.437 { 00:14:02.437 "name": null, 00:14:02.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.437 "is_configured": false, 00:14:02.437 "data_offset": 0, 00:14:02.437 "data_size": 63488 00:14:02.437 }, 00:14:02.437 { 00:14:02.437 "name": "BaseBdev3", 00:14:02.437 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:02.437 "is_configured": true, 00:14:02.437 "data_offset": 2048, 00:14:02.437 "data_size": 63488 00:14:02.437 }, 00:14:02.437 { 00:14:02.437 "name": "BaseBdev4", 00:14:02.437 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:02.437 "is_configured": true, 00:14:02.437 "data_offset": 2048, 00:14:02.437 "data_size": 63488 00:14:02.437 } 00:14:02.437 ] 00:14:02.437 }' 00:14:02.437 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.718 "name": "raid_bdev1", 00:14:02.718 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:02.718 "strip_size_kb": 0, 00:14:02.718 "state": "online", 00:14:02.718 "raid_level": "raid1", 00:14:02.718 "superblock": true, 00:14:02.718 "num_base_bdevs": 4, 00:14:02.718 "num_base_bdevs_discovered": 3, 00:14:02.718 "num_base_bdevs_operational": 3, 00:14:02.718 "base_bdevs_list": [ 00:14:02.718 { 00:14:02.718 "name": "spare", 00:14:02.718 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:14:02.718 "is_configured": true, 00:14:02.718 "data_offset": 2048, 00:14:02.718 "data_size": 63488 00:14:02.718 }, 00:14:02.718 { 00:14:02.718 "name": null, 00:14:02.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.718 "is_configured": false, 00:14:02.718 "data_offset": 0, 00:14:02.718 "data_size": 63488 00:14:02.718 }, 00:14:02.718 { 00:14:02.718 "name": "BaseBdev3", 00:14:02.718 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:02.718 "is_configured": true, 00:14:02.718 "data_offset": 2048, 00:14:02.718 "data_size": 63488 00:14:02.718 }, 00:14:02.718 { 00:14:02.718 "name": "BaseBdev4", 00:14:02.718 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:02.718 "is_configured": true, 00:14:02.718 "data_offset": 2048, 00:14:02.718 "data_size": 63488 00:14:02.718 } 00:14:02.718 ] 00:14:02.718 }' 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.718 
03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.718 "name": "raid_bdev1", 00:14:02.718 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:02.718 "strip_size_kb": 0, 00:14:02.718 "state": "online", 00:14:02.718 "raid_level": "raid1", 00:14:02.718 "superblock": true, 00:14:02.718 "num_base_bdevs": 4, 00:14:02.718 "num_base_bdevs_discovered": 3, 00:14:02.718 "num_base_bdevs_operational": 3, 00:14:02.718 "base_bdevs_list": [ 00:14:02.718 { 00:14:02.718 "name": "spare", 00:14:02.718 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:14:02.718 "is_configured": true, 00:14:02.718 "data_offset": 2048, 00:14:02.718 "data_size": 63488 00:14:02.718 }, 00:14:02.718 { 00:14:02.718 "name": null, 00:14:02.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.718 "is_configured": false, 00:14:02.718 "data_offset": 0, 00:14:02.718 "data_size": 63488 00:14:02.718 }, 00:14:02.718 { 00:14:02.718 "name": "BaseBdev3", 00:14:02.718 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:02.718 "is_configured": true, 00:14:02.718 "data_offset": 2048, 00:14:02.718 "data_size": 63488 00:14:02.718 }, 00:14:02.718 { 00:14:02.718 "name": "BaseBdev4", 00:14:02.718 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:02.718 "is_configured": true, 00:14:02.718 "data_offset": 2048, 00:14:02.718 "data_size": 63488 00:14:02.718 } 00:14:02.718 ] 00:14:02.718 }' 00:14:02.718 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.719 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.288 86.12 IOPS, 258.38 MiB/s [2024-11-20T03:20:52.923Z] 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.288 03:20:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.288 [2024-11-20 03:20:52.734396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.288 [2024-11-20 03:20:52.734440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.288 00:14:03.288 Latency(us) 00:14:03.288 [2024-11-20T03:20:52.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.288 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:03.288 raid_bdev1 : 8.13 85.47 256.40 0.00 0.00 16383.67 354.15 112183.90 00:14:03.288 [2024-11-20T03:20:52.923Z] =================================================================================================================== 00:14:03.288 [2024-11-20T03:20:52.923Z] Total : 85.47 256.40 0.00 0.00 16383.67 354.15 112183.90 00:14:03.288 [2024-11-20 03:20:52.827807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.288 [2024-11-20 03:20:52.827904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.288 [2024-11-20 03:20:52.828020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.288 [2024-11-20 03:20:52.828068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:03.288 { 00:14:03.288 "results": [ 00:14:03.288 { 00:14:03.288 "job": "raid_bdev1", 00:14:03.288 "core_mask": "0x1", 00:14:03.288 "workload": "randrw", 00:14:03.288 "percentage": 50, 00:14:03.288 "status": "finished", 00:14:03.288 "queue_depth": 2, 00:14:03.288 "io_size": 3145728, 00:14:03.288 "runtime": 8.131779, 00:14:03.288 "iops": 85.46715300551085, 00:14:03.288 "mibps": 256.40145901653256, 00:14:03.288 "io_failed": 0, 00:14:03.288 "io_timeout": 0, 00:14:03.288 "avg_latency_us": 16383.674440639628, 00:14:03.288 "min_latency_us": 
354.15196506550217, 00:14:03.288 "max_latency_us": 112183.89519650655 00:14:03.288 } 00:14:03.288 ], 00:14:03.288 "core_count": 1 00:14:03.288 } 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.288 03:20:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.288 03:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:03.549 /dev/nbd0 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.549 1+0 records in 00:14:03.549 1+0 records out 00:14:03.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384018 s, 10.7 MB/s 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- 
# (( i < 1 )) 00:14:03.549 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:03.809 /dev/nbd1 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.809 1+0 records in 00:14:03.809 1+0 records out 00:14:03.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400652 s, 10.2 MB/s 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:03.809 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.810 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.810 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:04.070 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:04.070 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.070 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:04.070 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:04.070 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:04.070 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.070 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.330 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:04.590 /dev/nbd1 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:04.590 03:20:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.590 03:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.590 1+0 records in 00:14:04.590 1+0 records out 00:14:04.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238485 s, 17.2 MB/s 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.590 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0') 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.850 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.110 [2024-11-20 03:20:54.571444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.110 [2024-11-20 03:20:54.571506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.110 [2024-11-20 03:20:54.571526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:05.110 [2024-11-20 03:20:54.571537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.110 [2024-11-20 03:20:54.573930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.110 [2024-11-20 03:20:54.573971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.110 [2024-11-20 03:20:54.574059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:05.110 [2024-11-20 03:20:54.574111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.110 [2024-11-20 03:20:54.574232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.110 [2024-11-20 03:20:54.574318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:05.110 spare 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.110 [2024-11-20 03:20:54.674250] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:14:05.110 [2024-11-20 03:20:54.674296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.110 [2024-11-20 03:20:54.674660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:05.110 [2024-11-20 03:20:54.674904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:05.110 [2024-11-20 03:20:54.674921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:05.110 [2024-11-20 03:20:54.675135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.110 "name": "raid_bdev1", 00:14:05.110 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:05.110 "strip_size_kb": 0, 00:14:05.110 "state": "online", 00:14:05.110 "raid_level": "raid1", 00:14:05.110 "superblock": true, 00:14:05.110 "num_base_bdevs": 4, 00:14:05.110 "num_base_bdevs_discovered": 3, 00:14:05.110 "num_base_bdevs_operational": 3, 00:14:05.110 "base_bdevs_list": [ 00:14:05.110 { 00:14:05.110 "name": "spare", 00:14:05.110 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:14:05.110 "is_configured": true, 00:14:05.110 "data_offset": 2048, 00:14:05.110 "data_size": 63488 00:14:05.110 }, 00:14:05.110 { 00:14:05.110 "name": null, 00:14:05.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.110 "is_configured": false, 00:14:05.110 "data_offset": 2048, 00:14:05.110 "data_size": 63488 00:14:05.110 }, 00:14:05.110 { 00:14:05.110 "name": "BaseBdev3", 00:14:05.110 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:05.110 "is_configured": true, 00:14:05.110 "data_offset": 2048, 00:14:05.110 "data_size": 63488 00:14:05.110 }, 00:14:05.110 { 00:14:05.110 "name": "BaseBdev4", 00:14:05.110 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:05.110 "is_configured": true, 00:14:05.110 "data_offset": 2048, 00:14:05.110 "data_size": 63488 00:14:05.110 } 00:14:05.110 ] 00:14:05.110 }' 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.110 03:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.680 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.680 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.680 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.680 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.680 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.680 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.680 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.680 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.681 "name": "raid_bdev1", 00:14:05.681 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:05.681 "strip_size_kb": 0, 00:14:05.681 "state": "online", 00:14:05.681 "raid_level": "raid1", 00:14:05.681 "superblock": true, 00:14:05.681 "num_base_bdevs": 4, 00:14:05.681 "num_base_bdevs_discovered": 3, 00:14:05.681 "num_base_bdevs_operational": 3, 00:14:05.681 "base_bdevs_list": [ 00:14:05.681 { 00:14:05.681 "name": "spare", 00:14:05.681 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:14:05.681 "is_configured": true, 00:14:05.681 "data_offset": 2048, 00:14:05.681 "data_size": 63488 00:14:05.681 }, 
00:14:05.681 { 00:14:05.681 "name": null, 00:14:05.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.681 "is_configured": false, 00:14:05.681 "data_offset": 2048, 00:14:05.681 "data_size": 63488 00:14:05.681 }, 00:14:05.681 { 00:14:05.681 "name": "BaseBdev3", 00:14:05.681 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:05.681 "is_configured": true, 00:14:05.681 "data_offset": 2048, 00:14:05.681 "data_size": 63488 00:14:05.681 }, 00:14:05.681 { 00:14:05.681 "name": "BaseBdev4", 00:14:05.681 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:05.681 "is_configured": true, 00:14:05.681 "data_offset": 2048, 00:14:05.681 "data_size": 63488 00:14:05.681 } 00:14:05.681 ] 00:14:05.681 }' 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:05.681 03:20:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.681 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.681 [2024-11-20 03:20:55.310370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.941 
03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.941 "name": "raid_bdev1", 00:14:05.941 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:05.941 "strip_size_kb": 0, 00:14:05.941 "state": "online", 00:14:05.941 "raid_level": "raid1", 00:14:05.941 "superblock": true, 00:14:05.941 "num_base_bdevs": 4, 00:14:05.941 "num_base_bdevs_discovered": 2, 00:14:05.941 "num_base_bdevs_operational": 2, 00:14:05.941 "base_bdevs_list": [ 00:14:05.941 { 00:14:05.941 "name": null, 00:14:05.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.941 "is_configured": false, 00:14:05.941 "data_offset": 0, 00:14:05.941 "data_size": 63488 00:14:05.941 }, 00:14:05.941 { 00:14:05.941 "name": null, 00:14:05.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.941 "is_configured": false, 00:14:05.941 "data_offset": 2048, 00:14:05.941 "data_size": 63488 00:14:05.941 }, 00:14:05.941 { 00:14:05.941 "name": "BaseBdev3", 00:14:05.941 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:05.941 "is_configured": true, 00:14:05.941 "data_offset": 2048, 00:14:05.941 "data_size": 63488 00:14:05.941 }, 00:14:05.941 { 00:14:05.941 "name": "BaseBdev4", 00:14:05.941 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:05.941 "is_configured": true, 00:14:05.941 "data_offset": 2048, 00:14:05.941 "data_size": 63488 00:14:05.941 } 00:14:05.941 ] 00:14:05.941 }' 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.941 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.201 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.201 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.201 03:20:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.201 [2024-11-20 03:20:55.733700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.201 [2024-11-20 03:20:55.733887] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:06.201 [2024-11-20 03:20:55.733904] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:06.201 [2024-11-20 03:20:55.733939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.201 [2024-11-20 03:20:55.749094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:06.201 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.201 03:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:06.201 [2024-11-20 03:20:55.751014] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.141 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.141 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.141 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.141 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.141 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.141 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.141 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.141 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.141 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.401 "name": "raid_bdev1", 00:14:07.401 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:07.401 "strip_size_kb": 0, 00:14:07.401 "state": "online", 00:14:07.401 "raid_level": "raid1", 00:14:07.401 "superblock": true, 00:14:07.401 "num_base_bdevs": 4, 00:14:07.401 "num_base_bdevs_discovered": 3, 00:14:07.401 "num_base_bdevs_operational": 3, 00:14:07.401 "process": { 00:14:07.401 "type": "rebuild", 00:14:07.401 "target": "spare", 00:14:07.401 "progress": { 00:14:07.401 "blocks": 20480, 00:14:07.401 "percent": 32 00:14:07.401 } 00:14:07.401 }, 00:14:07.401 "base_bdevs_list": [ 00:14:07.401 { 00:14:07.401 "name": "spare", 00:14:07.401 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:14:07.401 "is_configured": true, 00:14:07.401 "data_offset": 2048, 00:14:07.401 "data_size": 63488 00:14:07.401 }, 00:14:07.401 { 00:14:07.401 "name": null, 00:14:07.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.401 "is_configured": false, 00:14:07.401 "data_offset": 2048, 00:14:07.401 "data_size": 63488 00:14:07.401 }, 00:14:07.401 { 00:14:07.401 "name": "BaseBdev3", 00:14:07.401 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:07.401 "is_configured": true, 00:14:07.401 "data_offset": 2048, 00:14:07.401 "data_size": 63488 00:14:07.401 }, 00:14:07.401 { 00:14:07.401 "name": "BaseBdev4", 00:14:07.401 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:07.401 "is_configured": true, 00:14:07.401 "data_offset": 2048, 00:14:07.401 "data_size": 63488 00:14:07.401 } 00:14:07.401 ] 00:14:07.401 }' 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.401 [2024-11-20 03:20:56.910649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.401 [2024-11-20 03:20:56.956024] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:07.401 [2024-11-20 03:20:56.956141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.401 [2024-11-20 03:20:56.956159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.401 [2024-11-20 03:20:56.956171] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.401 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.402 03:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.402 03:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.662 03:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.662 "name": "raid_bdev1", 00:14:07.662 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:07.662 "strip_size_kb": 0, 00:14:07.662 "state": "online", 00:14:07.662 "raid_level": "raid1", 00:14:07.662 "superblock": true, 00:14:07.662 "num_base_bdevs": 4, 00:14:07.662 "num_base_bdevs_discovered": 2, 00:14:07.662 "num_base_bdevs_operational": 2, 00:14:07.662 "base_bdevs_list": [ 00:14:07.662 { 00:14:07.662 "name": null, 00:14:07.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.662 "is_configured": false, 00:14:07.662 "data_offset": 0, 00:14:07.662 "data_size": 63488 00:14:07.662 }, 00:14:07.662 { 00:14:07.662 "name": null, 00:14:07.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.662 "is_configured": false, 00:14:07.662 
"data_offset": 2048, 00:14:07.662 "data_size": 63488 00:14:07.662 }, 00:14:07.662 { 00:14:07.662 "name": "BaseBdev3", 00:14:07.662 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:07.662 "is_configured": true, 00:14:07.662 "data_offset": 2048, 00:14:07.662 "data_size": 63488 00:14:07.662 }, 00:14:07.662 { 00:14:07.662 "name": "BaseBdev4", 00:14:07.662 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:07.662 "is_configured": true, 00:14:07.662 "data_offset": 2048, 00:14:07.662 "data_size": 63488 00:14:07.662 } 00:14:07.662 ] 00:14:07.662 }' 00:14:07.662 03:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.662 03:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.922 03:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:07.922 03:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.922 03:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.922 [2024-11-20 03:20:57.409766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:07.922 [2024-11-20 03:20:57.409836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.922 [2024-11-20 03:20:57.409863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:07.922 [2024-11-20 03:20:57.409875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.922 [2024-11-20 03:20:57.410400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.922 [2024-11-20 03:20:57.410448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:07.922 [2024-11-20 03:20:57.410557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:07.922 [2024-11-20 
03:20:57.410577] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:07.922 [2024-11-20 03:20:57.410588] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:07.922 [2024-11-20 03:20:57.410632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.922 [2024-11-20 03:20:57.425617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:07.922 spare 00:14:07.922 03:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.922 03:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:07.922 [2024-11-20 03:20:57.427651] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.863 "name": "raid_bdev1", 00:14:08.863 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:08.863 "strip_size_kb": 0, 00:14:08.863 "state": "online", 00:14:08.863 "raid_level": "raid1", 00:14:08.863 "superblock": true, 00:14:08.863 "num_base_bdevs": 4, 00:14:08.863 "num_base_bdevs_discovered": 3, 00:14:08.863 "num_base_bdevs_operational": 3, 00:14:08.863 "process": { 00:14:08.863 "type": "rebuild", 00:14:08.863 "target": "spare", 00:14:08.863 "progress": { 00:14:08.863 "blocks": 20480, 00:14:08.863 "percent": 32 00:14:08.863 } 00:14:08.863 }, 00:14:08.863 "base_bdevs_list": [ 00:14:08.863 { 00:14:08.863 "name": "spare", 00:14:08.863 "uuid": "00d5a517-15bb-500a-82bd-0c09423b5562", 00:14:08.863 "is_configured": true, 00:14:08.863 "data_offset": 2048, 00:14:08.863 "data_size": 63488 00:14:08.863 }, 00:14:08.863 { 00:14:08.863 "name": null, 00:14:08.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.863 "is_configured": false, 00:14:08.863 "data_offset": 2048, 00:14:08.863 "data_size": 63488 00:14:08.863 }, 00:14:08.863 { 00:14:08.863 "name": "BaseBdev3", 00:14:08.863 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:08.863 "is_configured": true, 00:14:08.863 "data_offset": 2048, 00:14:08.863 "data_size": 63488 00:14:08.863 }, 00:14:08.863 { 00:14:08.863 "name": "BaseBdev4", 00:14:08.863 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:08.863 "is_configured": true, 00:14:08.863 "data_offset": 2048, 00:14:08.863 "data_size": 63488 00:14:08.863 } 00:14:08.863 ] 00:14:08.863 }' 00:14:08.863 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.123 [2024-11-20 03:20:58.583421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.123 [2024-11-20 03:20:58.632896] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:09.123 [2024-11-20 03:20:58.632983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.123 [2024-11-20 03:20:58.633005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.123 [2024-11-20 03:20:58.633013] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.123 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.124 "name": "raid_bdev1", 00:14:09.124 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:09.124 "strip_size_kb": 0, 00:14:09.124 "state": "online", 00:14:09.124 "raid_level": "raid1", 00:14:09.124 "superblock": true, 00:14:09.124 "num_base_bdevs": 4, 00:14:09.124 "num_base_bdevs_discovered": 2, 00:14:09.124 "num_base_bdevs_operational": 2, 00:14:09.124 "base_bdevs_list": [ 00:14:09.124 { 00:14:09.124 "name": null, 00:14:09.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.124 "is_configured": false, 00:14:09.124 "data_offset": 0, 00:14:09.124 "data_size": 63488 00:14:09.124 }, 00:14:09.124 { 00:14:09.124 "name": null, 00:14:09.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.124 "is_configured": false, 00:14:09.124 "data_offset": 2048, 00:14:09.124 "data_size": 63488 00:14:09.124 }, 00:14:09.124 { 00:14:09.124 "name": "BaseBdev3", 00:14:09.124 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:09.124 "is_configured": true, 
00:14:09.124 "data_offset": 2048, 00:14:09.124 "data_size": 63488 00:14:09.124 }, 00:14:09.124 { 00:14:09.124 "name": "BaseBdev4", 00:14:09.124 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:09.124 "is_configured": true, 00:14:09.124 "data_offset": 2048, 00:14:09.124 "data_size": 63488 00:14:09.124 } 00:14:09.124 ] 00:14:09.124 }' 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.124 03:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.694 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.694 "name": "raid_bdev1", 00:14:09.695 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:09.695 "strip_size_kb": 0, 00:14:09.695 "state": "online", 00:14:09.695 "raid_level": "raid1", 00:14:09.695 
"superblock": true, 00:14:09.695 "num_base_bdevs": 4, 00:14:09.695 "num_base_bdevs_discovered": 2, 00:14:09.695 "num_base_bdevs_operational": 2, 00:14:09.695 "base_bdevs_list": [ 00:14:09.695 { 00:14:09.695 "name": null, 00:14:09.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.695 "is_configured": false, 00:14:09.695 "data_offset": 0, 00:14:09.695 "data_size": 63488 00:14:09.695 }, 00:14:09.695 { 00:14:09.695 "name": null, 00:14:09.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.695 "is_configured": false, 00:14:09.695 "data_offset": 2048, 00:14:09.695 "data_size": 63488 00:14:09.695 }, 00:14:09.695 { 00:14:09.695 "name": "BaseBdev3", 00:14:09.695 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:09.695 "is_configured": true, 00:14:09.695 "data_offset": 2048, 00:14:09.695 "data_size": 63488 00:14:09.695 }, 00:14:09.695 { 00:14:09.695 "name": "BaseBdev4", 00:14:09.695 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:09.695 "is_configured": true, 00:14:09.695 "data_offset": 2048, 00:14:09.695 "data_size": 63488 00:14:09.695 } 00:14:09.695 ] 00:14:09.695 }' 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.695 [2024-11-20 03:20:59.273231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:09.695 [2024-11-20 03:20:59.273302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.695 [2024-11-20 03:20:59.273324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:09.695 [2024-11-20 03:20:59.273333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.695 [2024-11-20 03:20:59.273810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.695 [2024-11-20 03:20:59.273840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.695 [2024-11-20 03:20:59.273921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:09.695 [2024-11-20 03:20:59.273944] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:09.695 [2024-11-20 03:20:59.273954] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:09.695 [2024-11-20 03:20:59.273963] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:09.695 BaseBdev1 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.695 03:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.077 "name": "raid_bdev1", 00:14:11.077 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:11.077 "strip_size_kb": 0, 00:14:11.077 "state": "online", 00:14:11.077 "raid_level": "raid1", 00:14:11.077 "superblock": true, 00:14:11.077 
"num_base_bdevs": 4, 00:14:11.077 "num_base_bdevs_discovered": 2, 00:14:11.077 "num_base_bdevs_operational": 2, 00:14:11.077 "base_bdevs_list": [ 00:14:11.077 { 00:14:11.077 "name": null, 00:14:11.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.077 "is_configured": false, 00:14:11.077 "data_offset": 0, 00:14:11.077 "data_size": 63488 00:14:11.077 }, 00:14:11.077 { 00:14:11.077 "name": null, 00:14:11.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.077 "is_configured": false, 00:14:11.077 "data_offset": 2048, 00:14:11.077 "data_size": 63488 00:14:11.077 }, 00:14:11.077 { 00:14:11.077 "name": "BaseBdev3", 00:14:11.077 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:11.077 "is_configured": true, 00:14:11.077 "data_offset": 2048, 00:14:11.077 "data_size": 63488 00:14:11.077 }, 00:14:11.077 { 00:14:11.077 "name": "BaseBdev4", 00:14:11.077 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:11.077 "is_configured": true, 00:14:11.077 "data_offset": 2048, 00:14:11.077 "data_size": 63488 00:14:11.077 } 00:14:11.077 ] 00:14:11.077 }' 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.077 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.338 03:21:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.338 "name": "raid_bdev1", 00:14:11.338 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:11.338 "strip_size_kb": 0, 00:14:11.338 "state": "online", 00:14:11.338 "raid_level": "raid1", 00:14:11.338 "superblock": true, 00:14:11.338 "num_base_bdevs": 4, 00:14:11.338 "num_base_bdevs_discovered": 2, 00:14:11.338 "num_base_bdevs_operational": 2, 00:14:11.338 "base_bdevs_list": [ 00:14:11.338 { 00:14:11.338 "name": null, 00:14:11.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.338 "is_configured": false, 00:14:11.338 "data_offset": 0, 00:14:11.338 "data_size": 63488 00:14:11.338 }, 00:14:11.338 { 00:14:11.338 "name": null, 00:14:11.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.338 "is_configured": false, 00:14:11.338 "data_offset": 2048, 00:14:11.338 "data_size": 63488 00:14:11.338 }, 00:14:11.338 { 00:14:11.338 "name": "BaseBdev3", 00:14:11.338 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:11.338 "is_configured": true, 00:14:11.338 "data_offset": 2048, 00:14:11.338 "data_size": 63488 00:14:11.338 }, 00:14:11.338 { 00:14:11.338 "name": "BaseBdev4", 00:14:11.338 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:11.338 "is_configured": true, 00:14:11.338 "data_offset": 2048, 00:14:11.338 "data_size": 63488 00:14:11.338 } 00:14:11.338 ] 00:14:11.338 }' 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.338 03:21:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.338 [2024-11-20 03:21:00.882745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.338 [2024-11-20 03:21:00.882917] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:11.338 [2024-11-20 03:21:00.882932] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:14:11.338 request: 00:14:11.338 { 00:14:11.338 "base_bdev": "BaseBdev1", 00:14:11.338 "raid_bdev": "raid_bdev1", 00:14:11.338 "method": "bdev_raid_add_base_bdev", 00:14:11.338 "req_id": 1 00:14:11.338 } 00:14:11.338 Got JSON-RPC error response 00:14:11.338 response: 00:14:11.338 { 00:14:11.338 "code": -22, 00:14:11.338 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:11.338 } 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:11.338 03:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.279 03:21:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.279 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.539 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.539 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.539 "name": "raid_bdev1", 00:14:12.539 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:12.539 "strip_size_kb": 0, 00:14:12.539 "state": "online", 00:14:12.539 "raid_level": "raid1", 00:14:12.539 "superblock": true, 00:14:12.539 "num_base_bdevs": 4, 00:14:12.539 "num_base_bdevs_discovered": 2, 00:14:12.539 "num_base_bdevs_operational": 2, 00:14:12.539 "base_bdevs_list": [ 00:14:12.539 { 00:14:12.539 "name": null, 00:14:12.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.539 "is_configured": false, 00:14:12.539 "data_offset": 0, 00:14:12.539 "data_size": 63488 00:14:12.539 }, 00:14:12.539 { 00:14:12.539 "name": null, 00:14:12.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.539 "is_configured": false, 00:14:12.539 "data_offset": 2048, 00:14:12.539 "data_size": 63488 00:14:12.539 }, 00:14:12.539 { 00:14:12.539 "name": "BaseBdev3", 00:14:12.539 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:12.539 "is_configured": true, 00:14:12.539 "data_offset": 2048, 00:14:12.539 "data_size": 63488 00:14:12.539 }, 00:14:12.539 { 00:14:12.539 "name": "BaseBdev4", 00:14:12.539 "uuid": 
"b93c6be3-082b-5945-882e-677f857281d2", 00:14:12.539 "is_configured": true, 00:14:12.539 "data_offset": 2048, 00:14:12.539 "data_size": 63488 00:14:12.539 } 00:14:12.539 ] 00:14:12.539 }' 00:14:12.539 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.539 03:21:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.799 "name": "raid_bdev1", 00:14:12.799 "uuid": "15fe91d0-67f5-4ee3-8787-69fff8376f49", 00:14:12.799 "strip_size_kb": 0, 00:14:12.799 "state": "online", 00:14:12.799 "raid_level": "raid1", 00:14:12.799 "superblock": true, 00:14:12.799 "num_base_bdevs": 4, 00:14:12.799 "num_base_bdevs_discovered": 2, 00:14:12.799 "num_base_bdevs_operational": 2, 00:14:12.799 
"base_bdevs_list": [ 00:14:12.799 { 00:14:12.799 "name": null, 00:14:12.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.799 "is_configured": false, 00:14:12.799 "data_offset": 0, 00:14:12.799 "data_size": 63488 00:14:12.799 }, 00:14:12.799 { 00:14:12.799 "name": null, 00:14:12.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.799 "is_configured": false, 00:14:12.799 "data_offset": 2048, 00:14:12.799 "data_size": 63488 00:14:12.799 }, 00:14:12.799 { 00:14:12.799 "name": "BaseBdev3", 00:14:12.799 "uuid": "e6c06df2-e21e-5774-bae8-f030d6b8b92a", 00:14:12.799 "is_configured": true, 00:14:12.799 "data_offset": 2048, 00:14:12.799 "data_size": 63488 00:14:12.799 }, 00:14:12.799 { 00:14:12.799 "name": "BaseBdev4", 00:14:12.799 "uuid": "b93c6be3-082b-5945-882e-677f857281d2", 00:14:12.799 "is_configured": true, 00:14:12.799 "data_offset": 2048, 00:14:12.799 "data_size": 63488 00:14:12.799 } 00:14:12.799 ] 00:14:12.799 }' 00:14:12.799 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78977 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78977 ']' 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78977 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78977 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.060 killing process with pid 78977 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78977' 00:14:13.060 Received shutdown signal, test time was about 17.858451 seconds 00:14:13.060 00:14:13.060 Latency(us) 00:14:13.060 [2024-11-20T03:21:02.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.060 [2024-11-20T03:21:02.695Z] =================================================================================================================== 00:14:13.060 [2024-11-20T03:21:02.695Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78977 00:14:13.060 03:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78977 00:14:13.060 [2024-11-20 03:21:02.514156] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.060 [2024-11-20 03:21:02.514296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.060 [2024-11-20 03:21:02.514376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.060 [2024-11-20 03:21:02.514393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:13.320 [2024-11-20 03:21:02.948420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.702 03:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:14.702 00:14:14.702 real 0m21.235s 00:14:14.702 user 0m27.725s 00:14:14.702 sys 0m2.403s 00:14:14.702 03:21:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.702 03:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.702 ************************************ 00:14:14.702 END TEST raid_rebuild_test_sb_io 00:14:14.702 ************************************ 00:14:14.702 03:21:04 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:14.702 03:21:04 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:14.702 03:21:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:14.702 03:21:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.702 03:21:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.702 ************************************ 00:14:14.702 START TEST raid5f_state_function_test 00:14:14.702 ************************************ 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:14.702 03:21:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79699 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:14.702 Process raid pid: 79699 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79699' 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79699 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79699 ']' 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.702 03:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.702 [2024-11-20 03:21:04.279467] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:14:14.702 [2024-11-20 03:21:04.279628] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.962 [2024-11-20 03:21:04.452310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.962 [2024-11-20 03:21:04.568276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.222 [2024-11-20 03:21:04.778311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.222 [2024-11-20 03:21:04.778350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.482 [2024-11-20 03:21:05.107571] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.482 [2024-11-20 03:21:05.107641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.482 [2024-11-20 03:21:05.107652] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.482 [2024-11-20 03:21:05.107662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.482 [2024-11-20 03:21:05.107673] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:15.482 [2024-11-20 03:21:05.107682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.482 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.742 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.742 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.742 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.742 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.742 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:15.742 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.742 "name": "Existed_Raid", 00:14:15.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.742 "strip_size_kb": 64, 00:14:15.742 "state": "configuring", 00:14:15.742 "raid_level": "raid5f", 00:14:15.742 "superblock": false, 00:14:15.742 "num_base_bdevs": 3, 00:14:15.742 "num_base_bdevs_discovered": 0, 00:14:15.742 "num_base_bdevs_operational": 3, 00:14:15.742 "base_bdevs_list": [ 00:14:15.742 { 00:14:15.742 "name": "BaseBdev1", 00:14:15.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.742 "is_configured": false, 00:14:15.742 "data_offset": 0, 00:14:15.742 "data_size": 0 00:14:15.742 }, 00:14:15.742 { 00:14:15.742 "name": "BaseBdev2", 00:14:15.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.742 "is_configured": false, 00:14:15.742 "data_offset": 0, 00:14:15.742 "data_size": 0 00:14:15.742 }, 00:14:15.742 { 00:14:15.742 "name": "BaseBdev3", 00:14:15.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.742 "is_configured": false, 00:14:15.742 "data_offset": 0, 00:14:15.742 "data_size": 0 00:14:15.742 } 00:14:15.742 ] 00:14:15.742 }' 00:14:15.742 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.742 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.003 [2024-11-20 03:21:05.558759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.003 [2024-11-20 03:21:05.558796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.003 [2024-11-20 03:21:05.570735] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.003 [2024-11-20 03:21:05.570795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.003 [2024-11-20 03:21:05.570804] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.003 [2024-11-20 03:21:05.570815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.003 [2024-11-20 03:21:05.570822] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:16.003 [2024-11-20 03:21:05.570831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.003 [2024-11-20 03:21:05.618749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.003 BaseBdev1 00:14:16.003 03:21:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.003 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.275 [ 00:14:16.275 { 00:14:16.275 "name": "BaseBdev1", 00:14:16.275 "aliases": [ 00:14:16.275 "5cae6507-bb28-424c-a4a0-dd9f83e3e0be" 00:14:16.275 ], 00:14:16.275 "product_name": "Malloc disk", 00:14:16.275 "block_size": 512, 00:14:16.275 "num_blocks": 65536, 00:14:16.275 "uuid": "5cae6507-bb28-424c-a4a0-dd9f83e3e0be", 00:14:16.275 "assigned_rate_limits": { 00:14:16.275 "rw_ios_per_sec": 0, 00:14:16.275 
"rw_mbytes_per_sec": 0, 00:14:16.275 "r_mbytes_per_sec": 0, 00:14:16.275 "w_mbytes_per_sec": 0 00:14:16.275 }, 00:14:16.275 "claimed": true, 00:14:16.275 "claim_type": "exclusive_write", 00:14:16.275 "zoned": false, 00:14:16.275 "supported_io_types": { 00:14:16.275 "read": true, 00:14:16.275 "write": true, 00:14:16.275 "unmap": true, 00:14:16.275 "flush": true, 00:14:16.275 "reset": true, 00:14:16.275 "nvme_admin": false, 00:14:16.275 "nvme_io": false, 00:14:16.275 "nvme_io_md": false, 00:14:16.275 "write_zeroes": true, 00:14:16.275 "zcopy": true, 00:14:16.275 "get_zone_info": false, 00:14:16.275 "zone_management": false, 00:14:16.275 "zone_append": false, 00:14:16.275 "compare": false, 00:14:16.275 "compare_and_write": false, 00:14:16.275 "abort": true, 00:14:16.275 "seek_hole": false, 00:14:16.275 "seek_data": false, 00:14:16.275 "copy": true, 00:14:16.275 "nvme_iov_md": false 00:14:16.275 }, 00:14:16.275 "memory_domains": [ 00:14:16.275 { 00:14:16.275 "dma_device_id": "system", 00:14:16.275 "dma_device_type": 1 00:14:16.275 }, 00:14:16.275 { 00:14:16.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.275 "dma_device_type": 2 00:14:16.275 } 00:14:16.275 ], 00:14:16.275 "driver_specific": {} 00:14:16.275 } 00:14:16.275 ] 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.275 03:21:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.275 "name": "Existed_Raid", 00:14:16.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.275 "strip_size_kb": 64, 00:14:16.275 "state": "configuring", 00:14:16.275 "raid_level": "raid5f", 00:14:16.275 "superblock": false, 00:14:16.275 "num_base_bdevs": 3, 00:14:16.275 "num_base_bdevs_discovered": 1, 00:14:16.275 "num_base_bdevs_operational": 3, 00:14:16.275 "base_bdevs_list": [ 00:14:16.275 { 00:14:16.275 "name": "BaseBdev1", 00:14:16.275 "uuid": "5cae6507-bb28-424c-a4a0-dd9f83e3e0be", 00:14:16.275 "is_configured": true, 00:14:16.275 "data_offset": 0, 00:14:16.275 "data_size": 65536 00:14:16.275 }, 00:14:16.275 { 00:14:16.275 "name": 
"BaseBdev2", 00:14:16.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.275 "is_configured": false, 00:14:16.275 "data_offset": 0, 00:14:16.275 "data_size": 0 00:14:16.275 }, 00:14:16.275 { 00:14:16.275 "name": "BaseBdev3", 00:14:16.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.275 "is_configured": false, 00:14:16.275 "data_offset": 0, 00:14:16.275 "data_size": 0 00:14:16.275 } 00:14:16.275 ] 00:14:16.275 }' 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.275 03:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.570 [2024-11-20 03:21:06.106067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.570 [2024-11-20 03:21:06.106126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.570 [2024-11-20 03:21:06.114085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.570 [2024-11-20 03:21:06.115927] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:16.570 [2024-11-20 03:21:06.115968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.570 [2024-11-20 03:21:06.115994] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:16.570 [2024-11-20 03:21:06.116003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.570 "name": "Existed_Raid", 00:14:16.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.570 "strip_size_kb": 64, 00:14:16.570 "state": "configuring", 00:14:16.570 "raid_level": "raid5f", 00:14:16.570 "superblock": false, 00:14:16.570 "num_base_bdevs": 3, 00:14:16.570 "num_base_bdevs_discovered": 1, 00:14:16.570 "num_base_bdevs_operational": 3, 00:14:16.570 "base_bdevs_list": [ 00:14:16.570 { 00:14:16.570 "name": "BaseBdev1", 00:14:16.570 "uuid": "5cae6507-bb28-424c-a4a0-dd9f83e3e0be", 00:14:16.570 "is_configured": true, 00:14:16.570 "data_offset": 0, 00:14:16.570 "data_size": 65536 00:14:16.570 }, 00:14:16.570 { 00:14:16.570 "name": "BaseBdev2", 00:14:16.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.570 "is_configured": false, 00:14:16.570 "data_offset": 0, 00:14:16.570 "data_size": 0 00:14:16.570 }, 00:14:16.570 { 00:14:16.570 "name": "BaseBdev3", 00:14:16.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.570 "is_configured": false, 00:14:16.570 "data_offset": 0, 00:14:16.570 "data_size": 0 00:14:16.570 } 00:14:16.570 ] 00:14:16.570 }' 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.570 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.140 [2024-11-20 03:21:06.629967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.140 BaseBdev2 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:17.140 [ 00:14:17.140 { 00:14:17.140 "name": "BaseBdev2", 00:14:17.140 "aliases": [ 00:14:17.140 "b6f18d4d-bbce-42a8-8859-7a16407f25d4" 00:14:17.140 ], 00:14:17.140 "product_name": "Malloc disk", 00:14:17.140 "block_size": 512, 00:14:17.140 "num_blocks": 65536, 00:14:17.140 "uuid": "b6f18d4d-bbce-42a8-8859-7a16407f25d4", 00:14:17.140 "assigned_rate_limits": { 00:14:17.140 "rw_ios_per_sec": 0, 00:14:17.140 "rw_mbytes_per_sec": 0, 00:14:17.140 "r_mbytes_per_sec": 0, 00:14:17.140 "w_mbytes_per_sec": 0 00:14:17.140 }, 00:14:17.140 "claimed": true, 00:14:17.140 "claim_type": "exclusive_write", 00:14:17.140 "zoned": false, 00:14:17.140 "supported_io_types": { 00:14:17.140 "read": true, 00:14:17.140 "write": true, 00:14:17.140 "unmap": true, 00:14:17.140 "flush": true, 00:14:17.140 "reset": true, 00:14:17.140 "nvme_admin": false, 00:14:17.140 "nvme_io": false, 00:14:17.140 "nvme_io_md": false, 00:14:17.140 "write_zeroes": true, 00:14:17.140 "zcopy": true, 00:14:17.140 "get_zone_info": false, 00:14:17.140 "zone_management": false, 00:14:17.140 "zone_append": false, 00:14:17.140 "compare": false, 00:14:17.140 "compare_and_write": false, 00:14:17.140 "abort": true, 00:14:17.140 "seek_hole": false, 00:14:17.140 "seek_data": false, 00:14:17.140 "copy": true, 00:14:17.140 "nvme_iov_md": false 00:14:17.140 }, 00:14:17.140 "memory_domains": [ 00:14:17.140 { 00:14:17.140 "dma_device_id": "system", 00:14:17.140 "dma_device_type": 1 00:14:17.140 }, 00:14:17.140 { 00:14:17.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.140 "dma_device_type": 2 00:14:17.140 } 00:14:17.140 ], 00:14:17.140 "driver_specific": {} 00:14:17.140 } 00:14:17.140 ] 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.140 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:17.140 "name": "Existed_Raid", 00:14:17.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.140 "strip_size_kb": 64, 00:14:17.140 "state": "configuring", 00:14:17.140 "raid_level": "raid5f", 00:14:17.140 "superblock": false, 00:14:17.140 "num_base_bdevs": 3, 00:14:17.140 "num_base_bdevs_discovered": 2, 00:14:17.140 "num_base_bdevs_operational": 3, 00:14:17.140 "base_bdevs_list": [ 00:14:17.140 { 00:14:17.140 "name": "BaseBdev1", 00:14:17.140 "uuid": "5cae6507-bb28-424c-a4a0-dd9f83e3e0be", 00:14:17.140 "is_configured": true, 00:14:17.140 "data_offset": 0, 00:14:17.140 "data_size": 65536 00:14:17.141 }, 00:14:17.141 { 00:14:17.141 "name": "BaseBdev2", 00:14:17.141 "uuid": "b6f18d4d-bbce-42a8-8859-7a16407f25d4", 00:14:17.141 "is_configured": true, 00:14:17.141 "data_offset": 0, 00:14:17.141 "data_size": 65536 00:14:17.141 }, 00:14:17.141 { 00:14:17.141 "name": "BaseBdev3", 00:14:17.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.141 "is_configured": false, 00:14:17.141 "data_offset": 0, 00:14:17.141 "data_size": 0 00:14:17.141 } 00:14:17.141 ] 00:14:17.141 }' 00:14:17.141 03:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.141 03:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.709 [2024-11-20 03:21:07.190117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.709 [2024-11-20 03:21:07.190182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:17.709 [2024-11-20 03:21:07.190195] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:17.709 [2024-11-20 03:21:07.190489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:17.709 [2024-11-20 03:21:07.196589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:17.709 [2024-11-20 03:21:07.196619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:17.709 [2024-11-20 03:21:07.196939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.709 BaseBdev3 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.709 [ 00:14:17.709 { 00:14:17.709 "name": "BaseBdev3", 00:14:17.709 "aliases": [ 00:14:17.709 "73267f70-af29-4a93-b4ce-0c2a7438898c" 00:14:17.709 ], 00:14:17.709 "product_name": "Malloc disk", 00:14:17.709 "block_size": 512, 00:14:17.709 "num_blocks": 65536, 00:14:17.709 "uuid": "73267f70-af29-4a93-b4ce-0c2a7438898c", 00:14:17.709 "assigned_rate_limits": { 00:14:17.709 "rw_ios_per_sec": 0, 00:14:17.709 "rw_mbytes_per_sec": 0, 00:14:17.709 "r_mbytes_per_sec": 0, 00:14:17.709 "w_mbytes_per_sec": 0 00:14:17.709 }, 00:14:17.709 "claimed": true, 00:14:17.709 "claim_type": "exclusive_write", 00:14:17.709 "zoned": false, 00:14:17.709 "supported_io_types": { 00:14:17.709 "read": true, 00:14:17.709 "write": true, 00:14:17.709 "unmap": true, 00:14:17.709 "flush": true, 00:14:17.709 "reset": true, 00:14:17.709 "nvme_admin": false, 00:14:17.709 "nvme_io": false, 00:14:17.709 "nvme_io_md": false, 00:14:17.709 "write_zeroes": true, 00:14:17.709 "zcopy": true, 00:14:17.709 "get_zone_info": false, 00:14:17.709 "zone_management": false, 00:14:17.709 "zone_append": false, 00:14:17.709 "compare": false, 00:14:17.709 "compare_and_write": false, 00:14:17.709 "abort": true, 00:14:17.709 "seek_hole": false, 00:14:17.709 "seek_data": false, 00:14:17.709 "copy": true, 00:14:17.709 "nvme_iov_md": false 00:14:17.709 }, 00:14:17.709 "memory_domains": [ 00:14:17.709 { 00:14:17.709 "dma_device_id": "system", 00:14:17.709 "dma_device_type": 1 00:14:17.709 }, 00:14:17.709 { 00:14:17.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.709 "dma_device_type": 2 00:14:17.709 } 00:14:17.709 ], 00:14:17.709 "driver_specific": {} 00:14:17.709 } 00:14:17.709 ] 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.709 03:21:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.709 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.709 "name": "Existed_Raid", 00:14:17.709 "uuid": "4117b055-fa00-42d1-81a6-436f7add22c3", 00:14:17.709 "strip_size_kb": 64, 00:14:17.709 "state": "online", 00:14:17.709 "raid_level": "raid5f", 00:14:17.709 "superblock": false, 00:14:17.709 "num_base_bdevs": 3, 00:14:17.709 "num_base_bdevs_discovered": 3, 00:14:17.709 "num_base_bdevs_operational": 3, 00:14:17.709 "base_bdevs_list": [ 00:14:17.709 { 00:14:17.709 "name": "BaseBdev1", 00:14:17.709 "uuid": "5cae6507-bb28-424c-a4a0-dd9f83e3e0be", 00:14:17.709 "is_configured": true, 00:14:17.709 "data_offset": 0, 00:14:17.709 "data_size": 65536 00:14:17.709 }, 00:14:17.709 { 00:14:17.709 "name": "BaseBdev2", 00:14:17.709 "uuid": "b6f18d4d-bbce-42a8-8859-7a16407f25d4", 00:14:17.709 "is_configured": true, 00:14:17.709 "data_offset": 0, 00:14:17.709 "data_size": 65536 00:14:17.709 }, 00:14:17.709 { 00:14:17.709 "name": "BaseBdev3", 00:14:17.710 "uuid": "73267f70-af29-4a93-b4ce-0c2a7438898c", 00:14:17.710 "is_configured": true, 00:14:17.710 "data_offset": 0, 00:14:17.710 "data_size": 65536 00:14:17.710 } 00:14:17.710 ] 00:14:17.710 }' 00:14:17.710 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.710 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:18.276 03:21:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.276 [2024-11-20 03:21:07.703365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.276 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:18.276 "name": "Existed_Raid", 00:14:18.276 "aliases": [ 00:14:18.276 "4117b055-fa00-42d1-81a6-436f7add22c3" 00:14:18.276 ], 00:14:18.276 "product_name": "Raid Volume", 00:14:18.276 "block_size": 512, 00:14:18.276 "num_blocks": 131072, 00:14:18.276 "uuid": "4117b055-fa00-42d1-81a6-436f7add22c3", 00:14:18.276 "assigned_rate_limits": { 00:14:18.276 "rw_ios_per_sec": 0, 00:14:18.276 "rw_mbytes_per_sec": 0, 00:14:18.276 "r_mbytes_per_sec": 0, 00:14:18.276 "w_mbytes_per_sec": 0 00:14:18.276 }, 00:14:18.276 "claimed": false, 00:14:18.276 "zoned": false, 00:14:18.276 "supported_io_types": { 00:14:18.276 "read": true, 00:14:18.276 "write": true, 00:14:18.276 "unmap": false, 00:14:18.276 "flush": false, 00:14:18.276 "reset": true, 00:14:18.276 "nvme_admin": false, 00:14:18.276 "nvme_io": false, 00:14:18.276 "nvme_io_md": false, 00:14:18.276 "write_zeroes": true, 00:14:18.276 "zcopy": false, 00:14:18.276 "get_zone_info": false, 00:14:18.276 "zone_management": false, 00:14:18.276 "zone_append": false, 
00:14:18.276 "compare": false, 00:14:18.276 "compare_and_write": false, 00:14:18.276 "abort": false, 00:14:18.276 "seek_hole": false, 00:14:18.276 "seek_data": false, 00:14:18.276 "copy": false, 00:14:18.276 "nvme_iov_md": false 00:14:18.276 }, 00:14:18.276 "driver_specific": { 00:14:18.276 "raid": { 00:14:18.276 "uuid": "4117b055-fa00-42d1-81a6-436f7add22c3", 00:14:18.277 "strip_size_kb": 64, 00:14:18.277 "state": "online", 00:14:18.277 "raid_level": "raid5f", 00:14:18.277 "superblock": false, 00:14:18.277 "num_base_bdevs": 3, 00:14:18.277 "num_base_bdevs_discovered": 3, 00:14:18.277 "num_base_bdevs_operational": 3, 00:14:18.277 "base_bdevs_list": [ 00:14:18.277 { 00:14:18.277 "name": "BaseBdev1", 00:14:18.277 "uuid": "5cae6507-bb28-424c-a4a0-dd9f83e3e0be", 00:14:18.277 "is_configured": true, 00:14:18.277 "data_offset": 0, 00:14:18.277 "data_size": 65536 00:14:18.277 }, 00:14:18.277 { 00:14:18.277 "name": "BaseBdev2", 00:14:18.277 "uuid": "b6f18d4d-bbce-42a8-8859-7a16407f25d4", 00:14:18.277 "is_configured": true, 00:14:18.277 "data_offset": 0, 00:14:18.277 "data_size": 65536 00:14:18.277 }, 00:14:18.277 { 00:14:18.277 "name": "BaseBdev3", 00:14:18.277 "uuid": "73267f70-af29-4a93-b4ce-0c2a7438898c", 00:14:18.277 "is_configured": true, 00:14:18.277 "data_offset": 0, 00:14:18.277 "data_size": 65536 00:14:18.277 } 00:14:18.277 ] 00:14:18.277 } 00:14:18.277 } 00:14:18.277 }' 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:18.277 BaseBdev2 00:14:18.277 BaseBdev3' 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.277 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.536 03:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.536 [2024-11-20 03:21:07.986732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:18.536 
03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.536 "name": "Existed_Raid", 00:14:18.536 "uuid": "4117b055-fa00-42d1-81a6-436f7add22c3", 00:14:18.536 "strip_size_kb": 64, 00:14:18.536 "state": 
"online", 00:14:18.536 "raid_level": "raid5f", 00:14:18.536 "superblock": false, 00:14:18.536 "num_base_bdevs": 3, 00:14:18.536 "num_base_bdevs_discovered": 2, 00:14:18.536 "num_base_bdevs_operational": 2, 00:14:18.536 "base_bdevs_list": [ 00:14:18.536 { 00:14:18.536 "name": null, 00:14:18.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.536 "is_configured": false, 00:14:18.536 "data_offset": 0, 00:14:18.536 "data_size": 65536 00:14:18.536 }, 00:14:18.536 { 00:14:18.536 "name": "BaseBdev2", 00:14:18.536 "uuid": "b6f18d4d-bbce-42a8-8859-7a16407f25d4", 00:14:18.536 "is_configured": true, 00:14:18.536 "data_offset": 0, 00:14:18.536 "data_size": 65536 00:14:18.536 }, 00:14:18.536 { 00:14:18.536 "name": "BaseBdev3", 00:14:18.536 "uuid": "73267f70-af29-4a93-b4ce-0c2a7438898c", 00:14:18.536 "is_configured": true, 00:14:18.536 "data_offset": 0, 00:14:18.536 "data_size": 65536 00:14:18.536 } 00:14:18.536 ] 00:14:18.536 }' 00:14:18.536 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.537 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.103 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:19.103 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:19.103 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.103 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.103 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.103 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:19.103 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.104 [2024-11-20 03:21:08.570497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.104 [2024-11-20 03:21:08.570665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.104 [2024-11-20 03:21:08.670957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.104 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.104 [2024-11-20 03:21:08.730919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:19.104 [2024-11-20 03:21:08.730974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:19.362 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.362 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:19.362 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.363 BaseBdev2 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:19.363 [ 00:14:19.363 { 00:14:19.363 "name": "BaseBdev2", 00:14:19.363 "aliases": [ 00:14:19.363 "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6" 00:14:19.363 ], 00:14:19.363 "product_name": "Malloc disk", 00:14:19.363 "block_size": 512, 00:14:19.363 "num_blocks": 65536, 00:14:19.363 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:19.363 "assigned_rate_limits": { 00:14:19.363 "rw_ios_per_sec": 0, 00:14:19.363 "rw_mbytes_per_sec": 0, 00:14:19.363 "r_mbytes_per_sec": 0, 00:14:19.363 "w_mbytes_per_sec": 0 00:14:19.363 }, 00:14:19.363 "claimed": false, 00:14:19.363 "zoned": false, 00:14:19.363 "supported_io_types": { 00:14:19.363 "read": true, 00:14:19.363 "write": true, 00:14:19.363 "unmap": true, 00:14:19.363 "flush": true, 00:14:19.363 "reset": true, 00:14:19.363 "nvme_admin": false, 00:14:19.363 "nvme_io": false, 00:14:19.363 "nvme_io_md": false, 00:14:19.363 "write_zeroes": true, 00:14:19.363 "zcopy": true, 00:14:19.363 "get_zone_info": false, 00:14:19.363 "zone_management": false, 00:14:19.363 "zone_append": false, 00:14:19.363 "compare": false, 00:14:19.363 "compare_and_write": false, 00:14:19.363 "abort": true, 00:14:19.363 "seek_hole": false, 00:14:19.363 "seek_data": false, 00:14:19.363 "copy": true, 00:14:19.363 "nvme_iov_md": false 00:14:19.363 }, 00:14:19.363 "memory_domains": [ 00:14:19.363 { 00:14:19.363 "dma_device_id": "system", 00:14:19.363 "dma_device_type": 1 00:14:19.363 }, 00:14:19.363 { 00:14:19.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.363 "dma_device_type": 2 00:14:19.363 } 00:14:19.363 ], 00:14:19.363 "driver_specific": {} 00:14:19.363 } 00:14:19.363 ] 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.363 03:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.622 BaseBdev3 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.622 03:21:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.622 [ 00:14:19.622 { 00:14:19.622 "name": "BaseBdev3", 00:14:19.622 "aliases": [ 00:14:19.622 "d57d29e5-3398-45d4-996f-6e8b7c3cd113" 00:14:19.622 ], 00:14:19.622 "product_name": "Malloc disk", 00:14:19.622 "block_size": 512, 00:14:19.622 "num_blocks": 65536, 00:14:19.622 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:19.622 "assigned_rate_limits": { 00:14:19.622 "rw_ios_per_sec": 0, 00:14:19.622 "rw_mbytes_per_sec": 0, 00:14:19.622 "r_mbytes_per_sec": 0, 00:14:19.622 "w_mbytes_per_sec": 0 00:14:19.622 }, 00:14:19.622 "claimed": false, 00:14:19.622 "zoned": false, 00:14:19.622 "supported_io_types": { 00:14:19.622 "read": true, 00:14:19.622 "write": true, 00:14:19.622 "unmap": true, 00:14:19.622 "flush": true, 00:14:19.622 "reset": true, 00:14:19.622 "nvme_admin": false, 00:14:19.622 "nvme_io": false, 00:14:19.622 "nvme_io_md": false, 00:14:19.623 "write_zeroes": true, 00:14:19.623 "zcopy": true, 00:14:19.623 "get_zone_info": false, 00:14:19.623 "zone_management": false, 00:14:19.623 "zone_append": false, 00:14:19.623 "compare": false, 00:14:19.623 "compare_and_write": false, 00:14:19.623 "abort": true, 00:14:19.623 "seek_hole": false, 00:14:19.623 "seek_data": false, 00:14:19.623 "copy": true, 00:14:19.623 "nvme_iov_md": false 00:14:19.623 }, 00:14:19.623 "memory_domains": [ 00:14:19.623 { 00:14:19.623 "dma_device_id": "system", 00:14:19.623 "dma_device_type": 1 00:14:19.623 }, 00:14:19.623 { 00:14:19.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.623 "dma_device_type": 2 00:14:19.623 } 00:14:19.623 ], 00:14:19.623 "driver_specific": {} 00:14:19.623 } 00:14:19.623 ] 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:19.623 03:21:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.623 [2024-11-20 03:21:09.046434] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.623 [2024-11-20 03:21:09.046529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.623 [2024-11-20 03:21:09.046599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.623 [2024-11-20 03:21:09.048671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.623 03:21:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.623 "name": "Existed_Raid", 00:14:19.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.623 "strip_size_kb": 64, 00:14:19.623 "state": "configuring", 00:14:19.623 "raid_level": "raid5f", 00:14:19.623 "superblock": false, 00:14:19.623 "num_base_bdevs": 3, 00:14:19.623 "num_base_bdevs_discovered": 2, 00:14:19.623 "num_base_bdevs_operational": 3, 00:14:19.623 "base_bdevs_list": [ 00:14:19.623 { 00:14:19.623 "name": "BaseBdev1", 00:14:19.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.623 "is_configured": false, 00:14:19.623 "data_offset": 0, 00:14:19.623 "data_size": 0 00:14:19.623 }, 00:14:19.623 { 00:14:19.623 "name": "BaseBdev2", 00:14:19.623 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:19.623 "is_configured": true, 00:14:19.623 "data_offset": 0, 00:14:19.623 "data_size": 65536 00:14:19.623 }, 00:14:19.623 { 00:14:19.623 "name": "BaseBdev3", 00:14:19.623 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:19.623 "is_configured": true, 
00:14:19.623 "data_offset": 0, 00:14:19.623 "data_size": 65536 00:14:19.623 } 00:14:19.623 ] 00:14:19.623 }' 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.623 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.191 [2024-11-20 03:21:09.537619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.191 03:21:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.191 "name": "Existed_Raid", 00:14:20.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.191 "strip_size_kb": 64, 00:14:20.191 "state": "configuring", 00:14:20.191 "raid_level": "raid5f", 00:14:20.191 "superblock": false, 00:14:20.191 "num_base_bdevs": 3, 00:14:20.191 "num_base_bdevs_discovered": 1, 00:14:20.191 "num_base_bdevs_operational": 3, 00:14:20.191 "base_bdevs_list": [ 00:14:20.191 { 00:14:20.191 "name": "BaseBdev1", 00:14:20.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.191 "is_configured": false, 00:14:20.191 "data_offset": 0, 00:14:20.191 "data_size": 0 00:14:20.191 }, 00:14:20.191 { 00:14:20.191 "name": null, 00:14:20.191 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:20.191 "is_configured": false, 00:14:20.191 "data_offset": 0, 00:14:20.191 "data_size": 65536 00:14:20.191 }, 00:14:20.191 { 00:14:20.191 "name": "BaseBdev3", 00:14:20.191 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:20.191 "is_configured": true, 00:14:20.191 "data_offset": 0, 00:14:20.191 "data_size": 65536 00:14:20.191 } 00:14:20.191 ] 00:14:20.191 }' 00:14:20.191 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.191 03:21:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.451 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.451 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.451 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.451 03:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:20.451 03:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.451 [2024-11-20 03:21:10.061032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.451 BaseBdev1 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.451 03:21:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.451 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.710 [ 00:14:20.710 { 00:14:20.710 "name": "BaseBdev1", 00:14:20.710 "aliases": [ 00:14:20.710 "a332af72-5e99-4c2f-aaf7-a7262f4186b7" 00:14:20.710 ], 00:14:20.710 "product_name": "Malloc disk", 00:14:20.710 "block_size": 512, 00:14:20.710 "num_blocks": 65536, 00:14:20.710 "uuid": "a332af72-5e99-4c2f-aaf7-a7262f4186b7", 00:14:20.710 "assigned_rate_limits": { 00:14:20.710 "rw_ios_per_sec": 0, 00:14:20.710 "rw_mbytes_per_sec": 0, 00:14:20.710 "r_mbytes_per_sec": 0, 00:14:20.710 "w_mbytes_per_sec": 0 00:14:20.710 }, 00:14:20.710 "claimed": true, 00:14:20.710 "claim_type": "exclusive_write", 00:14:20.710 "zoned": false, 00:14:20.710 "supported_io_types": { 00:14:20.710 "read": true, 00:14:20.710 "write": true, 00:14:20.710 "unmap": true, 00:14:20.710 "flush": true, 00:14:20.710 "reset": true, 00:14:20.710 "nvme_admin": false, 00:14:20.710 "nvme_io": false, 00:14:20.710 "nvme_io_md": false, 00:14:20.710 "write_zeroes": true, 00:14:20.710 "zcopy": true, 00:14:20.710 "get_zone_info": false, 00:14:20.710 "zone_management": false, 00:14:20.710 "zone_append": false, 00:14:20.710 
"compare": false, 00:14:20.710 "compare_and_write": false, 00:14:20.710 "abort": true, 00:14:20.710 "seek_hole": false, 00:14:20.710 "seek_data": false, 00:14:20.710 "copy": true, 00:14:20.710 "nvme_iov_md": false 00:14:20.710 }, 00:14:20.710 "memory_domains": [ 00:14:20.710 { 00:14:20.710 "dma_device_id": "system", 00:14:20.710 "dma_device_type": 1 00:14:20.710 }, 00:14:20.710 { 00:14:20.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.710 "dma_device_type": 2 00:14:20.710 } 00:14:20.710 ], 00:14:20.710 "driver_specific": {} 00:14:20.710 } 00:14:20.710 ] 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.710 03:21:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.710 "name": "Existed_Raid", 00:14:20.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.710 "strip_size_kb": 64, 00:14:20.710 "state": "configuring", 00:14:20.710 "raid_level": "raid5f", 00:14:20.710 "superblock": false, 00:14:20.710 "num_base_bdevs": 3, 00:14:20.710 "num_base_bdevs_discovered": 2, 00:14:20.710 "num_base_bdevs_operational": 3, 00:14:20.710 "base_bdevs_list": [ 00:14:20.710 { 00:14:20.710 "name": "BaseBdev1", 00:14:20.710 "uuid": "a332af72-5e99-4c2f-aaf7-a7262f4186b7", 00:14:20.710 "is_configured": true, 00:14:20.710 "data_offset": 0, 00:14:20.710 "data_size": 65536 00:14:20.710 }, 00:14:20.710 { 00:14:20.710 "name": null, 00:14:20.710 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:20.710 "is_configured": false, 00:14:20.710 "data_offset": 0, 00:14:20.710 "data_size": 65536 00:14:20.710 }, 00:14:20.710 { 00:14:20.710 "name": "BaseBdev3", 00:14:20.710 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:20.710 "is_configured": true, 00:14:20.710 "data_offset": 0, 00:14:20.710 "data_size": 65536 00:14:20.710 } 00:14:20.710 ] 00:14:20.710 }' 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.710 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.970 03:21:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.970 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.970 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.970 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.970 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.229 [2024-11-20 03:21:10.616132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.229 03:21:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.229 "name": "Existed_Raid", 00:14:21.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.229 "strip_size_kb": 64, 00:14:21.229 "state": "configuring", 00:14:21.229 "raid_level": "raid5f", 00:14:21.229 "superblock": false, 00:14:21.229 "num_base_bdevs": 3, 00:14:21.229 "num_base_bdevs_discovered": 1, 00:14:21.229 "num_base_bdevs_operational": 3, 00:14:21.229 "base_bdevs_list": [ 00:14:21.229 { 00:14:21.229 "name": "BaseBdev1", 00:14:21.229 "uuid": "a332af72-5e99-4c2f-aaf7-a7262f4186b7", 00:14:21.229 "is_configured": true, 00:14:21.229 "data_offset": 0, 00:14:21.229 "data_size": 65536 00:14:21.229 }, 00:14:21.229 { 00:14:21.229 "name": null, 00:14:21.229 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:21.229 "is_configured": false, 00:14:21.229 "data_offset": 0, 00:14:21.229 "data_size": 65536 00:14:21.229 }, 00:14:21.229 { 00:14:21.229 "name": null, 
00:14:21.229 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:21.229 "is_configured": false, 00:14:21.229 "data_offset": 0, 00:14:21.229 "data_size": 65536 00:14:21.229 } 00:14:21.229 ] 00:14:21.229 }' 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.229 03:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.488 [2024-11-20 03:21:11.099346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.488 03:21:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.488 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.747 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.747 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.747 "name": "Existed_Raid", 00:14:21.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.747 "strip_size_kb": 64, 00:14:21.747 "state": "configuring", 00:14:21.747 "raid_level": "raid5f", 00:14:21.747 "superblock": false, 00:14:21.747 "num_base_bdevs": 3, 00:14:21.747 "num_base_bdevs_discovered": 2, 00:14:21.747 "num_base_bdevs_operational": 3, 00:14:21.747 "base_bdevs_list": [ 00:14:21.747 { 
00:14:21.747 "name": "BaseBdev1", 00:14:21.747 "uuid": "a332af72-5e99-4c2f-aaf7-a7262f4186b7", 00:14:21.747 "is_configured": true, 00:14:21.747 "data_offset": 0, 00:14:21.747 "data_size": 65536 00:14:21.747 }, 00:14:21.747 { 00:14:21.747 "name": null, 00:14:21.747 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:21.747 "is_configured": false, 00:14:21.747 "data_offset": 0, 00:14:21.747 "data_size": 65536 00:14:21.747 }, 00:14:21.747 { 00:14:21.747 "name": "BaseBdev3", 00:14:21.747 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:21.747 "is_configured": true, 00:14:21.747 "data_offset": 0, 00:14:21.747 "data_size": 65536 00:14:21.747 } 00:14:21.747 ] 00:14:21.747 }' 00:14:21.747 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.747 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.006 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.006 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.006 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:22.006 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.006 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.006 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:22.006 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:22.006 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.006 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.006 [2024-11-20 03:21:11.542648] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.266 "name": "Existed_Raid", 00:14:22.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.266 "strip_size_kb": 64, 00:14:22.266 "state": "configuring", 00:14:22.266 "raid_level": "raid5f", 00:14:22.266 "superblock": false, 00:14:22.266 "num_base_bdevs": 3, 00:14:22.266 "num_base_bdevs_discovered": 1, 00:14:22.266 "num_base_bdevs_operational": 3, 00:14:22.266 "base_bdevs_list": [ 00:14:22.266 { 00:14:22.266 "name": null, 00:14:22.266 "uuid": "a332af72-5e99-4c2f-aaf7-a7262f4186b7", 00:14:22.266 "is_configured": false, 00:14:22.266 "data_offset": 0, 00:14:22.266 "data_size": 65536 00:14:22.266 }, 00:14:22.266 { 00:14:22.266 "name": null, 00:14:22.266 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:22.266 "is_configured": false, 00:14:22.266 "data_offset": 0, 00:14:22.266 "data_size": 65536 00:14:22.266 }, 00:14:22.266 { 00:14:22.266 "name": "BaseBdev3", 00:14:22.266 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:22.266 "is_configured": true, 00:14:22.266 "data_offset": 0, 00:14:22.266 "data_size": 65536 00:14:22.266 } 00:14:22.266 ] 00:14:22.266 }' 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.266 03:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.525 [2024-11-20 03:21:12.111534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.525 03:21:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.525 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.784 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.784 "name": "Existed_Raid", 00:14:22.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.784 "strip_size_kb": 64, 00:14:22.784 "state": "configuring", 00:14:22.784 "raid_level": "raid5f", 00:14:22.784 "superblock": false, 00:14:22.784 "num_base_bdevs": 3, 00:14:22.784 "num_base_bdevs_discovered": 2, 00:14:22.784 "num_base_bdevs_operational": 3, 00:14:22.784 "base_bdevs_list": [ 00:14:22.784 { 00:14:22.784 "name": null, 00:14:22.784 "uuid": "a332af72-5e99-4c2f-aaf7-a7262f4186b7", 00:14:22.784 "is_configured": false, 00:14:22.784 "data_offset": 0, 00:14:22.784 "data_size": 65536 00:14:22.784 }, 00:14:22.784 { 00:14:22.784 "name": "BaseBdev2", 00:14:22.784 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:22.784 "is_configured": true, 00:14:22.784 "data_offset": 0, 00:14:22.784 "data_size": 65536 00:14:22.784 }, 00:14:22.784 { 00:14:22.784 "name": "BaseBdev3", 00:14:22.784 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:22.784 "is_configured": true, 00:14:22.784 "data_offset": 0, 00:14:22.784 "data_size": 65536 00:14:22.784 } 00:14:22.784 ] 00:14:22.784 }' 00:14:22.784 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.784 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.043 03:21:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a332af72-5e99-4c2f-aaf7-a7262f4186b7 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.043 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.303 [2024-11-20 03:21:12.707924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:23.303 [2024-11-20 03:21:12.707974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:23.303 [2024-11-20 03:21:12.707983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:23.303 [2024-11-20 03:21:12.708215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:23.303 [2024-11-20 03:21:12.713563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:23.303 [2024-11-20 03:21:12.713583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:23.303 [2024-11-20 03:21:12.713841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.303 NewBaseBdev 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.303 03:21:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.303 [ 00:14:23.303 { 00:14:23.303 "name": "NewBaseBdev", 00:14:23.303 "aliases": [ 00:14:23.303 "a332af72-5e99-4c2f-aaf7-a7262f4186b7" 00:14:23.303 ], 00:14:23.303 "product_name": "Malloc disk", 00:14:23.303 "block_size": 512, 00:14:23.303 "num_blocks": 65536, 00:14:23.303 "uuid": "a332af72-5e99-4c2f-aaf7-a7262f4186b7", 00:14:23.303 "assigned_rate_limits": { 00:14:23.303 "rw_ios_per_sec": 0, 00:14:23.303 "rw_mbytes_per_sec": 0, 00:14:23.303 "r_mbytes_per_sec": 0, 00:14:23.303 "w_mbytes_per_sec": 0 00:14:23.303 }, 00:14:23.303 "claimed": true, 00:14:23.303 "claim_type": "exclusive_write", 00:14:23.303 "zoned": false, 00:14:23.303 "supported_io_types": { 00:14:23.303 "read": true, 00:14:23.303 "write": true, 00:14:23.303 "unmap": true, 00:14:23.303 "flush": true, 00:14:23.303 "reset": true, 00:14:23.303 "nvme_admin": false, 00:14:23.303 "nvme_io": false, 00:14:23.303 "nvme_io_md": false, 00:14:23.303 "write_zeroes": true, 00:14:23.303 "zcopy": true, 00:14:23.303 "get_zone_info": false, 00:14:23.303 "zone_management": false, 00:14:23.303 "zone_append": false, 00:14:23.303 "compare": false, 00:14:23.303 "compare_and_write": false, 00:14:23.303 "abort": true, 00:14:23.303 "seek_hole": false, 00:14:23.303 "seek_data": false, 00:14:23.303 "copy": true, 00:14:23.303 "nvme_iov_md": false 00:14:23.303 }, 00:14:23.303 "memory_domains": [ 00:14:23.303 { 00:14:23.303 "dma_device_id": "system", 00:14:23.303 "dma_device_type": 1 00:14:23.303 }, 00:14:23.303 { 00:14:23.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.303 "dma_device_type": 2 00:14:23.303 } 00:14:23.303 ], 00:14:23.303 "driver_specific": {} 00:14:23.303 } 00:14:23.303 ] 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:23.303 03:21:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.303 "name": "Existed_Raid", 00:14:23.303 "uuid": "dd17e397-ba78-41b8-80f6-f7efc6374ba5", 00:14:23.303 "strip_size_kb": 64, 00:14:23.303 "state": "online", 
00:14:23.303 "raid_level": "raid5f", 00:14:23.303 "superblock": false, 00:14:23.303 "num_base_bdevs": 3, 00:14:23.303 "num_base_bdevs_discovered": 3, 00:14:23.303 "num_base_bdevs_operational": 3, 00:14:23.303 "base_bdevs_list": [ 00:14:23.303 { 00:14:23.303 "name": "NewBaseBdev", 00:14:23.303 "uuid": "a332af72-5e99-4c2f-aaf7-a7262f4186b7", 00:14:23.303 "is_configured": true, 00:14:23.303 "data_offset": 0, 00:14:23.303 "data_size": 65536 00:14:23.303 }, 00:14:23.303 { 00:14:23.303 "name": "BaseBdev2", 00:14:23.303 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:23.303 "is_configured": true, 00:14:23.303 "data_offset": 0, 00:14:23.303 "data_size": 65536 00:14:23.303 }, 00:14:23.303 { 00:14:23.303 "name": "BaseBdev3", 00:14:23.303 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:23.303 "is_configured": true, 00:14:23.303 "data_offset": 0, 00:14:23.303 "data_size": 65536 00:14:23.303 } 00:14:23.303 ] 00:14:23.303 }' 00:14:23.303 03:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.304 03:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.872 [2024-11-20 03:21:13.211744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:23.872 "name": "Existed_Raid", 00:14:23.872 "aliases": [ 00:14:23.872 "dd17e397-ba78-41b8-80f6-f7efc6374ba5" 00:14:23.872 ], 00:14:23.872 "product_name": "Raid Volume", 00:14:23.872 "block_size": 512, 00:14:23.872 "num_blocks": 131072, 00:14:23.872 "uuid": "dd17e397-ba78-41b8-80f6-f7efc6374ba5", 00:14:23.872 "assigned_rate_limits": { 00:14:23.872 "rw_ios_per_sec": 0, 00:14:23.872 "rw_mbytes_per_sec": 0, 00:14:23.872 "r_mbytes_per_sec": 0, 00:14:23.872 "w_mbytes_per_sec": 0 00:14:23.872 }, 00:14:23.872 "claimed": false, 00:14:23.872 "zoned": false, 00:14:23.872 "supported_io_types": { 00:14:23.872 "read": true, 00:14:23.872 "write": true, 00:14:23.872 "unmap": false, 00:14:23.872 "flush": false, 00:14:23.872 "reset": true, 00:14:23.872 "nvme_admin": false, 00:14:23.872 "nvme_io": false, 00:14:23.872 "nvme_io_md": false, 00:14:23.872 "write_zeroes": true, 00:14:23.872 "zcopy": false, 00:14:23.872 "get_zone_info": false, 00:14:23.872 "zone_management": false, 00:14:23.872 "zone_append": false, 00:14:23.872 "compare": false, 00:14:23.872 "compare_and_write": false, 00:14:23.872 "abort": false, 00:14:23.872 "seek_hole": false, 00:14:23.872 "seek_data": false, 00:14:23.872 "copy": false, 00:14:23.872 "nvme_iov_md": false 00:14:23.872 }, 00:14:23.872 "driver_specific": { 00:14:23.872 "raid": { 00:14:23.872 "uuid": "dd17e397-ba78-41b8-80f6-f7efc6374ba5", 
00:14:23.872 "strip_size_kb": 64, 00:14:23.872 "state": "online", 00:14:23.872 "raid_level": "raid5f", 00:14:23.872 "superblock": false, 00:14:23.872 "num_base_bdevs": 3, 00:14:23.872 "num_base_bdevs_discovered": 3, 00:14:23.872 "num_base_bdevs_operational": 3, 00:14:23.872 "base_bdevs_list": [ 00:14:23.872 { 00:14:23.872 "name": "NewBaseBdev", 00:14:23.872 "uuid": "a332af72-5e99-4c2f-aaf7-a7262f4186b7", 00:14:23.872 "is_configured": true, 00:14:23.872 "data_offset": 0, 00:14:23.872 "data_size": 65536 00:14:23.872 }, 00:14:23.872 { 00:14:23.872 "name": "BaseBdev2", 00:14:23.872 "uuid": "ad4c3e7c-239b-4b2e-8a69-8a9f0e1149b6", 00:14:23.872 "is_configured": true, 00:14:23.872 "data_offset": 0, 00:14:23.872 "data_size": 65536 00:14:23.872 }, 00:14:23.872 { 00:14:23.872 "name": "BaseBdev3", 00:14:23.872 "uuid": "d57d29e5-3398-45d4-996f-6e8b7c3cd113", 00:14:23.872 "is_configured": true, 00:14:23.872 "data_offset": 0, 00:14:23.872 "data_size": 65536 00:14:23.872 } 00:14:23.872 ] 00:14:23.872 } 00:14:23.872 } 00:14:23.872 }' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:23.872 BaseBdev2 00:14:23.872 BaseBdev3' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.872 [2024-11-20 03:21:13.463087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.872 [2024-11-20 03:21:13.463115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.872 [2024-11-20 03:21:13.463188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.872 [2024-11-20 03:21:13.463465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.872 [2024-11-20 03:21:13.463480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79699 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79699 ']' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
79699 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.872 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79699 00:14:24.132 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.132 killing process with pid 79699 00:14:24.132 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.132 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79699' 00:14:24.132 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79699 00:14:24.132 [2024-11-20 03:21:13.511264] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.132 03:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79699 00:14:24.390 [2024-11-20 03:21:13.832355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.329 03:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:25.329 00:14:25.329 real 0m10.761s 00:14:25.329 user 0m17.115s 00:14:25.329 sys 0m1.959s 00:14:25.329 ************************************ 00:14:25.329 END TEST raid5f_state_function_test 00:14:25.329 ************************************ 00:14:25.329 03:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.329 03:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.598 03:21:14 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:25.598 03:21:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:14:25.598 03:21:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.598 03:21:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.598 ************************************ 00:14:25.598 START TEST raid5f_state_function_test_sb 00:14:25.598 ************************************ 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80326 00:14:25.598 Process raid pid: 80326 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80326' 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80326 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80326 ']' 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.598 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.599 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.599 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:25.599 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.599 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.599 [2024-11-20 03:21:15.103104] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:14:25.599 [2024-11-20 03:21:15.103226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.871 [2024-11-20 03:21:15.283660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.871 [2024-11-20 03:21:15.396418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.132 [2024-11-20 03:21:15.596358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.132 [2024-11-20 03:21:15.596398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:26.392 03:21:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.392 [2024-11-20 03:21:15.946704] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.392 [2024-11-20 03:21:15.946755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.392 [2024-11-20 03:21:15.946766] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.392 [2024-11-20 03:21:15.946793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.392 [2024-11-20 03:21:15.946800] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:26.392 [2024-11-20 03:21:15.946810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.392 03:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.392 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.392 "name": "Existed_Raid", 00:14:26.392 "uuid": "fae6938e-4e7c-46c4-8a39-827267d6d59e", 00:14:26.392 "strip_size_kb": 64, 00:14:26.392 "state": "configuring", 00:14:26.392 "raid_level": "raid5f", 00:14:26.392 "superblock": true, 00:14:26.392 "num_base_bdevs": 3, 00:14:26.392 "num_base_bdevs_discovered": 0, 00:14:26.392 "num_base_bdevs_operational": 3, 00:14:26.392 "base_bdevs_list": [ 00:14:26.392 { 00:14:26.392 "name": "BaseBdev1", 00:14:26.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.392 "is_configured": false, 00:14:26.392 "data_offset": 0, 00:14:26.392 "data_size": 0 00:14:26.392 }, 00:14:26.392 { 00:14:26.392 "name": "BaseBdev2", 00:14:26.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.392 "is_configured": false, 00:14:26.392 
"data_offset": 0, 00:14:26.392 "data_size": 0 00:14:26.392 }, 00:14:26.392 { 00:14:26.392 "name": "BaseBdev3", 00:14:26.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.392 "is_configured": false, 00:14:26.392 "data_offset": 0, 00:14:26.392 "data_size": 0 00:14:26.392 } 00:14:26.392 ] 00:14:26.392 }' 00:14:26.392 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.392 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.963 [2024-11-20 03:21:16.385875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.963 [2024-11-20 03:21:16.385984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.963 [2024-11-20 03:21:16.393859] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.963 [2024-11-20 03:21:16.393943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.963 [2024-11-20 03:21:16.393971] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.963 [2024-11-20 03:21:16.393994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.963 [2024-11-20 03:21:16.394012] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:26.963 [2024-11-20 03:21:16.394033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.963 [2024-11-20 03:21:16.437676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.963 BaseBdev1 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.963 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.963 [ 00:14:26.963 { 00:14:26.963 "name": "BaseBdev1", 00:14:26.963 "aliases": [ 00:14:26.963 "53b44e2f-4172-450d-8289-1ceb75fd4900" 00:14:26.963 ], 00:14:26.963 "product_name": "Malloc disk", 00:14:26.963 "block_size": 512, 00:14:26.963 "num_blocks": 65536, 00:14:26.963 "uuid": "53b44e2f-4172-450d-8289-1ceb75fd4900", 00:14:26.963 "assigned_rate_limits": { 00:14:26.963 "rw_ios_per_sec": 0, 00:14:26.963 "rw_mbytes_per_sec": 0, 00:14:26.963 "r_mbytes_per_sec": 0, 00:14:26.963 "w_mbytes_per_sec": 0 00:14:26.963 }, 00:14:26.963 "claimed": true, 00:14:26.963 "claim_type": "exclusive_write", 00:14:26.963 "zoned": false, 00:14:26.963 "supported_io_types": { 00:14:26.963 "read": true, 00:14:26.963 "write": true, 00:14:26.963 "unmap": true, 00:14:26.963 "flush": true, 00:14:26.963 "reset": true, 00:14:26.963 "nvme_admin": false, 00:14:26.963 "nvme_io": false, 00:14:26.963 "nvme_io_md": false, 00:14:26.963 "write_zeroes": true, 00:14:26.963 "zcopy": true, 00:14:26.963 "get_zone_info": false, 00:14:26.963 "zone_management": false, 00:14:26.963 "zone_append": false, 00:14:26.964 "compare": false, 00:14:26.964 "compare_and_write": false, 00:14:26.964 "abort": true, 00:14:26.964 "seek_hole": false, 00:14:26.964 
"seek_data": false, 00:14:26.964 "copy": true, 00:14:26.964 "nvme_iov_md": false 00:14:26.964 }, 00:14:26.964 "memory_domains": [ 00:14:26.964 { 00:14:26.964 "dma_device_id": "system", 00:14:26.964 "dma_device_type": 1 00:14:26.964 }, 00:14:26.964 { 00:14:26.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.964 "dma_device_type": 2 00:14:26.964 } 00:14:26.964 ], 00:14:26.964 "driver_specific": {} 00:14:26.964 } 00:14:26.964 ] 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.964 "name": "Existed_Raid", 00:14:26.964 "uuid": "a0ee21e0-cb84-4ec7-9a4a-a3b5c0209b1f", 00:14:26.964 "strip_size_kb": 64, 00:14:26.964 "state": "configuring", 00:14:26.964 "raid_level": "raid5f", 00:14:26.964 "superblock": true, 00:14:26.964 "num_base_bdevs": 3, 00:14:26.964 "num_base_bdevs_discovered": 1, 00:14:26.964 "num_base_bdevs_operational": 3, 00:14:26.964 "base_bdevs_list": [ 00:14:26.964 { 00:14:26.964 "name": "BaseBdev1", 00:14:26.964 "uuid": "53b44e2f-4172-450d-8289-1ceb75fd4900", 00:14:26.964 "is_configured": true, 00:14:26.964 "data_offset": 2048, 00:14:26.964 "data_size": 63488 00:14:26.964 }, 00:14:26.964 { 00:14:26.964 "name": "BaseBdev2", 00:14:26.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.964 "is_configured": false, 00:14:26.964 "data_offset": 0, 00:14:26.964 "data_size": 0 00:14:26.964 }, 00:14:26.964 { 00:14:26.964 "name": "BaseBdev3", 00:14:26.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.964 "is_configured": false, 00:14:26.964 "data_offset": 0, 00:14:26.964 "data_size": 0 00:14:26.964 } 00:14:26.964 ] 00:14:26.964 }' 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.964 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.535 [2024-11-20 03:21:16.868988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.535 [2024-11-20 03:21:16.869095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.535 [2024-11-20 03:21:16.881023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.535 [2024-11-20 03:21:16.882935] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.535 [2024-11-20 03:21:16.883014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.535 [2024-11-20 03:21:16.883049] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.535 [2024-11-20 03:21:16.883073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.535 "name": 
"Existed_Raid", 00:14:27.535 "uuid": "952b66e5-d312-4203-a017-71a314eb11f6", 00:14:27.535 "strip_size_kb": 64, 00:14:27.535 "state": "configuring", 00:14:27.535 "raid_level": "raid5f", 00:14:27.535 "superblock": true, 00:14:27.535 "num_base_bdevs": 3, 00:14:27.535 "num_base_bdevs_discovered": 1, 00:14:27.535 "num_base_bdevs_operational": 3, 00:14:27.535 "base_bdevs_list": [ 00:14:27.535 { 00:14:27.535 "name": "BaseBdev1", 00:14:27.535 "uuid": "53b44e2f-4172-450d-8289-1ceb75fd4900", 00:14:27.535 "is_configured": true, 00:14:27.535 "data_offset": 2048, 00:14:27.535 "data_size": 63488 00:14:27.535 }, 00:14:27.535 { 00:14:27.535 "name": "BaseBdev2", 00:14:27.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.535 "is_configured": false, 00:14:27.535 "data_offset": 0, 00:14:27.535 "data_size": 0 00:14:27.535 }, 00:14:27.535 { 00:14:27.535 "name": "BaseBdev3", 00:14:27.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.535 "is_configured": false, 00:14:27.535 "data_offset": 0, 00:14:27.535 "data_size": 0 00:14:27.535 } 00:14:27.535 ] 00:14:27.535 }' 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.535 03:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.796 [2024-11-20 03:21:17.369766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.796 BaseBdev2 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.796 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.796 [ 00:14:27.796 { 00:14:27.796 "name": "BaseBdev2", 00:14:27.796 "aliases": [ 00:14:27.796 "248b1995-7d62-4fbd-a64d-fc04fd1ab04e" 00:14:27.796 ], 00:14:27.796 "product_name": "Malloc disk", 00:14:27.796 "block_size": 512, 00:14:27.796 "num_blocks": 65536, 00:14:27.796 "uuid": "248b1995-7d62-4fbd-a64d-fc04fd1ab04e", 00:14:27.796 "assigned_rate_limits": { 00:14:27.796 "rw_ios_per_sec": 0, 00:14:27.796 "rw_mbytes_per_sec": 0, 00:14:27.796 "r_mbytes_per_sec": 0, 00:14:27.796 "w_mbytes_per_sec": 0 00:14:27.796 }, 00:14:27.796 "claimed": true, 
00:14:27.796 "claim_type": "exclusive_write", 00:14:27.796 "zoned": false, 00:14:27.796 "supported_io_types": { 00:14:27.796 "read": true, 00:14:27.797 "write": true, 00:14:27.797 "unmap": true, 00:14:27.797 "flush": true, 00:14:27.797 "reset": true, 00:14:27.797 "nvme_admin": false, 00:14:27.797 "nvme_io": false, 00:14:27.797 "nvme_io_md": false, 00:14:27.797 "write_zeroes": true, 00:14:27.797 "zcopy": true, 00:14:27.797 "get_zone_info": false, 00:14:27.797 "zone_management": false, 00:14:27.797 "zone_append": false, 00:14:27.797 "compare": false, 00:14:27.797 "compare_and_write": false, 00:14:27.797 "abort": true, 00:14:27.797 "seek_hole": false, 00:14:27.797 "seek_data": false, 00:14:27.797 "copy": true, 00:14:27.797 "nvme_iov_md": false 00:14:27.797 }, 00:14:27.797 "memory_domains": [ 00:14:27.797 { 00:14:27.797 "dma_device_id": "system", 00:14:27.797 "dma_device_type": 1 00:14:27.797 }, 00:14:27.797 { 00:14:27.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.797 "dma_device_type": 2 00:14:27.797 } 00:14:27.797 ], 00:14:27.797 "driver_specific": {} 00:14:27.797 } 00:14:27.797 ] 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.797 03:21:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.797 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.058 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.058 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.058 "name": "Existed_Raid", 00:14:28.058 "uuid": "952b66e5-d312-4203-a017-71a314eb11f6", 00:14:28.058 "strip_size_kb": 64, 00:14:28.058 "state": "configuring", 00:14:28.058 "raid_level": "raid5f", 00:14:28.058 "superblock": true, 00:14:28.058 "num_base_bdevs": 3, 00:14:28.058 "num_base_bdevs_discovered": 2, 00:14:28.058 "num_base_bdevs_operational": 3, 00:14:28.058 "base_bdevs_list": [ 00:14:28.058 { 00:14:28.058 "name": "BaseBdev1", 00:14:28.058 "uuid": "53b44e2f-4172-450d-8289-1ceb75fd4900", 
00:14:28.058 "is_configured": true, 00:14:28.058 "data_offset": 2048, 00:14:28.058 "data_size": 63488 00:14:28.058 }, 00:14:28.058 { 00:14:28.058 "name": "BaseBdev2", 00:14:28.058 "uuid": "248b1995-7d62-4fbd-a64d-fc04fd1ab04e", 00:14:28.058 "is_configured": true, 00:14:28.058 "data_offset": 2048, 00:14:28.058 "data_size": 63488 00:14:28.058 }, 00:14:28.058 { 00:14:28.058 "name": "BaseBdev3", 00:14:28.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.058 "is_configured": false, 00:14:28.058 "data_offset": 0, 00:14:28.058 "data_size": 0 00:14:28.058 } 00:14:28.058 ] 00:14:28.058 }' 00:14:28.058 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.058 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.319 [2024-11-20 03:21:17.921265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.319 [2024-11-20 03:21:17.921526] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:28.319 [2024-11-20 03:21:17.921549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:28.319 [2024-11-20 03:21:17.922069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:28.319 BaseBdev3 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.319 [2024-11-20 03:21:17.928130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:28.319 [2024-11-20 03:21:17.928187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:28.319 [2024-11-20 03:21:17.928403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.319 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.319 [ 00:14:28.319 { 00:14:28.578 "name": "BaseBdev3", 00:14:28.578 "aliases": [ 00:14:28.578 "14d89ebb-abaf-42a5-8b49-3b7704f6bfa2" 00:14:28.578 ], 00:14:28.578 "product_name": "Malloc disk", 00:14:28.578 "block_size": 512, 00:14:28.578 
"num_blocks": 65536, 00:14:28.578 "uuid": "14d89ebb-abaf-42a5-8b49-3b7704f6bfa2", 00:14:28.578 "assigned_rate_limits": { 00:14:28.578 "rw_ios_per_sec": 0, 00:14:28.578 "rw_mbytes_per_sec": 0, 00:14:28.578 "r_mbytes_per_sec": 0, 00:14:28.578 "w_mbytes_per_sec": 0 00:14:28.578 }, 00:14:28.578 "claimed": true, 00:14:28.578 "claim_type": "exclusive_write", 00:14:28.578 "zoned": false, 00:14:28.579 "supported_io_types": { 00:14:28.579 "read": true, 00:14:28.579 "write": true, 00:14:28.579 "unmap": true, 00:14:28.579 "flush": true, 00:14:28.579 "reset": true, 00:14:28.579 "nvme_admin": false, 00:14:28.579 "nvme_io": false, 00:14:28.579 "nvme_io_md": false, 00:14:28.579 "write_zeroes": true, 00:14:28.579 "zcopy": true, 00:14:28.579 "get_zone_info": false, 00:14:28.579 "zone_management": false, 00:14:28.579 "zone_append": false, 00:14:28.579 "compare": false, 00:14:28.579 "compare_and_write": false, 00:14:28.579 "abort": true, 00:14:28.579 "seek_hole": false, 00:14:28.579 "seek_data": false, 00:14:28.579 "copy": true, 00:14:28.579 "nvme_iov_md": false 00:14:28.579 }, 00:14:28.579 "memory_domains": [ 00:14:28.579 { 00:14:28.579 "dma_device_id": "system", 00:14:28.579 "dma_device_type": 1 00:14:28.579 }, 00:14:28.579 { 00:14:28.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.579 "dma_device_type": 2 00:14:28.579 } 00:14:28.579 ], 00:14:28.579 "driver_specific": {} 00:14:28.579 } 00:14:28.579 ] 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.579 03:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.579 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.579 "name": "Existed_Raid", 00:14:28.579 "uuid": "952b66e5-d312-4203-a017-71a314eb11f6", 00:14:28.579 "strip_size_kb": 64, 00:14:28.579 "state": "online", 00:14:28.579 "raid_level": "raid5f", 00:14:28.579 "superblock": true, 
00:14:28.579 "num_base_bdevs": 3, 00:14:28.579 "num_base_bdevs_discovered": 3, 00:14:28.579 "num_base_bdevs_operational": 3, 00:14:28.579 "base_bdevs_list": [ 00:14:28.579 { 00:14:28.579 "name": "BaseBdev1", 00:14:28.579 "uuid": "53b44e2f-4172-450d-8289-1ceb75fd4900", 00:14:28.579 "is_configured": true, 00:14:28.579 "data_offset": 2048, 00:14:28.579 "data_size": 63488 00:14:28.579 }, 00:14:28.579 { 00:14:28.579 "name": "BaseBdev2", 00:14:28.579 "uuid": "248b1995-7d62-4fbd-a64d-fc04fd1ab04e", 00:14:28.579 "is_configured": true, 00:14:28.579 "data_offset": 2048, 00:14:28.579 "data_size": 63488 00:14:28.579 }, 00:14:28.579 { 00:14:28.579 "name": "BaseBdev3", 00:14:28.579 "uuid": "14d89ebb-abaf-42a5-8b49-3b7704f6bfa2", 00:14:28.579 "is_configured": true, 00:14:28.579 "data_offset": 2048, 00:14:28.579 "data_size": 63488 00:14:28.579 } 00:14:28.579 ] 00:14:28.579 }' 00:14:28.579 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.579 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.839 [2024-11-20 03:21:18.438379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:28.839 "name": "Existed_Raid", 00:14:28.839 "aliases": [ 00:14:28.839 "952b66e5-d312-4203-a017-71a314eb11f6" 00:14:28.839 ], 00:14:28.839 "product_name": "Raid Volume", 00:14:28.839 "block_size": 512, 00:14:28.839 "num_blocks": 126976, 00:14:28.839 "uuid": "952b66e5-d312-4203-a017-71a314eb11f6", 00:14:28.839 "assigned_rate_limits": { 00:14:28.839 "rw_ios_per_sec": 0, 00:14:28.839 "rw_mbytes_per_sec": 0, 00:14:28.839 "r_mbytes_per_sec": 0, 00:14:28.839 "w_mbytes_per_sec": 0 00:14:28.839 }, 00:14:28.839 "claimed": false, 00:14:28.839 "zoned": false, 00:14:28.839 "supported_io_types": { 00:14:28.839 "read": true, 00:14:28.839 "write": true, 00:14:28.839 "unmap": false, 00:14:28.839 "flush": false, 00:14:28.839 "reset": true, 00:14:28.839 "nvme_admin": false, 00:14:28.839 "nvme_io": false, 00:14:28.839 "nvme_io_md": false, 00:14:28.839 "write_zeroes": true, 00:14:28.839 "zcopy": false, 00:14:28.839 "get_zone_info": false, 00:14:28.839 "zone_management": false, 00:14:28.839 "zone_append": false, 00:14:28.839 "compare": false, 00:14:28.839 "compare_and_write": false, 00:14:28.839 "abort": false, 00:14:28.839 "seek_hole": false, 00:14:28.839 "seek_data": false, 00:14:28.839 "copy": false, 00:14:28.839 "nvme_iov_md": false 00:14:28.839 }, 00:14:28.839 "driver_specific": { 00:14:28.839 "raid": { 00:14:28.839 "uuid": "952b66e5-d312-4203-a017-71a314eb11f6", 00:14:28.839 
"strip_size_kb": 64, 00:14:28.839 "state": "online", 00:14:28.839 "raid_level": "raid5f", 00:14:28.839 "superblock": true, 00:14:28.839 "num_base_bdevs": 3, 00:14:28.839 "num_base_bdevs_discovered": 3, 00:14:28.839 "num_base_bdevs_operational": 3, 00:14:28.839 "base_bdevs_list": [ 00:14:28.839 { 00:14:28.839 "name": "BaseBdev1", 00:14:28.839 "uuid": "53b44e2f-4172-450d-8289-1ceb75fd4900", 00:14:28.839 "is_configured": true, 00:14:28.839 "data_offset": 2048, 00:14:28.839 "data_size": 63488 00:14:28.839 }, 00:14:28.839 { 00:14:28.839 "name": "BaseBdev2", 00:14:28.839 "uuid": "248b1995-7d62-4fbd-a64d-fc04fd1ab04e", 00:14:28.839 "is_configured": true, 00:14:28.839 "data_offset": 2048, 00:14:28.839 "data_size": 63488 00:14:28.839 }, 00:14:28.839 { 00:14:28.839 "name": "BaseBdev3", 00:14:28.839 "uuid": "14d89ebb-abaf-42a5-8b49-3b7704f6bfa2", 00:14:28.839 "is_configured": true, 00:14:28.839 "data_offset": 2048, 00:14:28.839 "data_size": 63488 00:14:28.839 } 00:14:28.839 ] 00:14:28.839 } 00:14:28.839 } 00:14:28.839 }' 00:14:28.839 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:29.099 BaseBdev2 00:14:29.099 BaseBdev3' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.099 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.099 [2024-11-20 03:21:18.717765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.360 
03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.360 "name": "Existed_Raid", 00:14:29.360 "uuid": "952b66e5-d312-4203-a017-71a314eb11f6", 00:14:29.360 "strip_size_kb": 64, 00:14:29.360 "state": "online", 00:14:29.360 "raid_level": "raid5f", 00:14:29.360 "superblock": true, 00:14:29.360 "num_base_bdevs": 3, 00:14:29.360 "num_base_bdevs_discovered": 2, 00:14:29.360 "num_base_bdevs_operational": 2, 00:14:29.360 
"base_bdevs_list": [ 00:14:29.360 { 00:14:29.360 "name": null, 00:14:29.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.360 "is_configured": false, 00:14:29.360 "data_offset": 0, 00:14:29.360 "data_size": 63488 00:14:29.360 }, 00:14:29.360 { 00:14:29.360 "name": "BaseBdev2", 00:14:29.360 "uuid": "248b1995-7d62-4fbd-a64d-fc04fd1ab04e", 00:14:29.360 "is_configured": true, 00:14:29.360 "data_offset": 2048, 00:14:29.360 "data_size": 63488 00:14:29.360 }, 00:14:29.360 { 00:14:29.360 "name": "BaseBdev3", 00:14:29.360 "uuid": "14d89ebb-abaf-42a5-8b49-3b7704f6bfa2", 00:14:29.360 "is_configured": true, 00:14:29.360 "data_offset": 2048, 00:14:29.360 "data_size": 63488 00:14:29.360 } 00:14:29.360 ] 00:14:29.360 }' 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.360 03:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:29.929 03:21:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 [2024-11-20 03:21:19.326376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.929 [2024-11-20 03:21:19.326532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.929 [2024-11-20 03:21:19.426305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:29.929 03:21:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.929 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 [2024-11-20 03:21:19.486244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:29.929 [2024-11-20 03:21:19.486297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.190 BaseBdev2 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.190 [ 00:14:30.190 { 00:14:30.190 "name": "BaseBdev2", 
00:14:30.190 "aliases": [ 00:14:30.190 "34c00442-6821-4579-9ad3-e209929c5889" 00:14:30.190 ], 00:14:30.190 "product_name": "Malloc disk", 00:14:30.190 "block_size": 512, 00:14:30.190 "num_blocks": 65536, 00:14:30.190 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:30.190 "assigned_rate_limits": { 00:14:30.190 "rw_ios_per_sec": 0, 00:14:30.190 "rw_mbytes_per_sec": 0, 00:14:30.190 "r_mbytes_per_sec": 0, 00:14:30.190 "w_mbytes_per_sec": 0 00:14:30.190 }, 00:14:30.190 "claimed": false, 00:14:30.190 "zoned": false, 00:14:30.190 "supported_io_types": { 00:14:30.190 "read": true, 00:14:30.190 "write": true, 00:14:30.190 "unmap": true, 00:14:30.190 "flush": true, 00:14:30.190 "reset": true, 00:14:30.190 "nvme_admin": false, 00:14:30.190 "nvme_io": false, 00:14:30.190 "nvme_io_md": false, 00:14:30.190 "write_zeroes": true, 00:14:30.190 "zcopy": true, 00:14:30.190 "get_zone_info": false, 00:14:30.190 "zone_management": false, 00:14:30.190 "zone_append": false, 00:14:30.190 "compare": false, 00:14:30.190 "compare_and_write": false, 00:14:30.190 "abort": true, 00:14:30.190 "seek_hole": false, 00:14:30.190 "seek_data": false, 00:14:30.190 "copy": true, 00:14:30.190 "nvme_iov_md": false 00:14:30.190 }, 00:14:30.190 "memory_domains": [ 00:14:30.190 { 00:14:30.190 "dma_device_id": "system", 00:14:30.190 "dma_device_type": 1 00:14:30.190 }, 00:14:30.190 { 00:14:30.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.190 "dma_device_type": 2 00:14:30.190 } 00:14:30.190 ], 00:14:30.190 "driver_specific": {} 00:14:30.190 } 00:14:30.190 ] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.190 BaseBdev3 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.190 [ 00:14:30.190 { 00:14:30.190 "name": "BaseBdev3", 00:14:30.190 "aliases": [ 00:14:30.190 "0ac22b81-8fb6-4875-b36d-a3ca927aee58" 00:14:30.190 ], 00:14:30.190 "product_name": "Malloc disk", 00:14:30.190 "block_size": 512, 00:14:30.190 "num_blocks": 65536, 00:14:30.190 "uuid": "0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:30.190 "assigned_rate_limits": { 00:14:30.190 "rw_ios_per_sec": 0, 00:14:30.190 "rw_mbytes_per_sec": 0, 00:14:30.190 "r_mbytes_per_sec": 0, 00:14:30.190 "w_mbytes_per_sec": 0 00:14:30.190 }, 00:14:30.190 "claimed": false, 00:14:30.190 "zoned": false, 00:14:30.190 "supported_io_types": { 00:14:30.190 "read": true, 00:14:30.190 "write": true, 00:14:30.190 "unmap": true, 00:14:30.190 "flush": true, 00:14:30.190 "reset": true, 00:14:30.190 "nvme_admin": false, 00:14:30.190 "nvme_io": false, 00:14:30.190 "nvme_io_md": false, 00:14:30.190 "write_zeroes": true, 00:14:30.190 "zcopy": true, 00:14:30.190 "get_zone_info": false, 00:14:30.190 "zone_management": false, 00:14:30.190 "zone_append": false, 00:14:30.190 "compare": false, 00:14:30.190 "compare_and_write": false, 00:14:30.190 "abort": true, 00:14:30.190 "seek_hole": false, 00:14:30.190 "seek_data": false, 00:14:30.190 "copy": true, 00:14:30.190 "nvme_iov_md": false 00:14:30.190 }, 00:14:30.190 "memory_domains": [ 00:14:30.190 { 00:14:30.190 "dma_device_id": "system", 00:14:30.190 "dma_device_type": 1 00:14:30.190 }, 00:14:30.190 { 00:14:30.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.190 "dma_device_type": 2 00:14:30.190 } 00:14:30.190 ], 00:14:30.190 "driver_specific": {} 00:14:30.190 } 00:14:30.190 ] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:30.190 
03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.190 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.191 [2024-11-20 03:21:19.819558] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.191 [2024-11-20 03:21:19.819665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.191 [2024-11-20 03:21:19.819712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.191 [2024-11-20 03:21:19.821667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.450 "name": "Existed_Raid", 00:14:30.450 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:30.450 "strip_size_kb": 64, 00:14:30.450 "state": "configuring", 00:14:30.450 "raid_level": "raid5f", 00:14:30.450 "superblock": true, 00:14:30.450 "num_base_bdevs": 3, 00:14:30.450 "num_base_bdevs_discovered": 2, 00:14:30.450 "num_base_bdevs_operational": 3, 00:14:30.450 "base_bdevs_list": [ 00:14:30.450 { 00:14:30.450 "name": "BaseBdev1", 00:14:30.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.450 "is_configured": false, 00:14:30.450 "data_offset": 0, 00:14:30.450 "data_size": 0 00:14:30.450 }, 00:14:30.450 { 00:14:30.450 "name": "BaseBdev2", 00:14:30.450 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:30.450 "is_configured": true, 00:14:30.450 "data_offset": 2048, 00:14:30.450 "data_size": 63488 00:14:30.450 }, 00:14:30.450 { 00:14:30.450 "name": "BaseBdev3", 00:14:30.450 "uuid": 
"0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:30.450 "is_configured": true, 00:14:30.450 "data_offset": 2048, 00:14:30.450 "data_size": 63488 00:14:30.450 } 00:14:30.450 ] 00:14:30.450 }' 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.450 03:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.710 [2024-11-20 03:21:20.246848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.710 03:21:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.710 "name": "Existed_Raid", 00:14:30.710 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:30.710 "strip_size_kb": 64, 00:14:30.710 "state": "configuring", 00:14:30.710 "raid_level": "raid5f", 00:14:30.710 "superblock": true, 00:14:30.710 "num_base_bdevs": 3, 00:14:30.710 "num_base_bdevs_discovered": 1, 00:14:30.710 "num_base_bdevs_operational": 3, 00:14:30.710 "base_bdevs_list": [ 00:14:30.710 { 00:14:30.710 "name": "BaseBdev1", 00:14:30.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.710 "is_configured": false, 00:14:30.710 "data_offset": 0, 00:14:30.710 "data_size": 0 00:14:30.710 }, 00:14:30.710 { 00:14:30.710 "name": null, 00:14:30.710 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:30.710 "is_configured": false, 00:14:30.710 "data_offset": 0, 00:14:30.710 "data_size": 63488 00:14:30.710 }, 00:14:30.710 { 00:14:30.710 "name": "BaseBdev3", 00:14:30.710 "uuid": "0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:30.710 "is_configured": true, 00:14:30.710 "data_offset": 2048, 00:14:30.710 "data_size": 63488 00:14:30.710 } 00:14:30.710 ] 
00:14:30.710 }' 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.710 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.280 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:31.280 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.280 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.280 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.280 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.281 [2024-11-20 03:21:20.800888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.281 BaseBdev1 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.281 [ 00:14:31.281 { 00:14:31.281 "name": "BaseBdev1", 00:14:31.281 "aliases": [ 00:14:31.281 "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b" 00:14:31.281 ], 00:14:31.281 "product_name": "Malloc disk", 00:14:31.281 "block_size": 512, 00:14:31.281 "num_blocks": 65536, 00:14:31.281 "uuid": "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b", 00:14:31.281 "assigned_rate_limits": { 00:14:31.281 "rw_ios_per_sec": 0, 00:14:31.281 "rw_mbytes_per_sec": 0, 00:14:31.281 "r_mbytes_per_sec": 0, 00:14:31.281 "w_mbytes_per_sec": 0 00:14:31.281 }, 00:14:31.281 "claimed": true, 00:14:31.281 "claim_type": "exclusive_write", 00:14:31.281 "zoned": false, 00:14:31.281 "supported_io_types": { 00:14:31.281 "read": true, 00:14:31.281 "write": true, 00:14:31.281 "unmap": true, 00:14:31.281 "flush": true, 00:14:31.281 "reset": true, 00:14:31.281 "nvme_admin": false, 00:14:31.281 "nvme_io": false, 00:14:31.281 
"nvme_io_md": false, 00:14:31.281 "write_zeroes": true, 00:14:31.281 "zcopy": true, 00:14:31.281 "get_zone_info": false, 00:14:31.281 "zone_management": false, 00:14:31.281 "zone_append": false, 00:14:31.281 "compare": false, 00:14:31.281 "compare_and_write": false, 00:14:31.281 "abort": true, 00:14:31.281 "seek_hole": false, 00:14:31.281 "seek_data": false, 00:14:31.281 "copy": true, 00:14:31.281 "nvme_iov_md": false 00:14:31.281 }, 00:14:31.281 "memory_domains": [ 00:14:31.281 { 00:14:31.281 "dma_device_id": "system", 00:14:31.281 "dma_device_type": 1 00:14:31.281 }, 00:14:31.281 { 00:14:31.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.281 "dma_device_type": 2 00:14:31.281 } 00:14:31.281 ], 00:14:31.281 "driver_specific": {} 00:14:31.281 } 00:14:31.281 ] 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.281 
03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.281 "name": "Existed_Raid", 00:14:31.281 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:31.281 "strip_size_kb": 64, 00:14:31.281 "state": "configuring", 00:14:31.281 "raid_level": "raid5f", 00:14:31.281 "superblock": true, 00:14:31.281 "num_base_bdevs": 3, 00:14:31.281 "num_base_bdevs_discovered": 2, 00:14:31.281 "num_base_bdevs_operational": 3, 00:14:31.281 "base_bdevs_list": [ 00:14:31.281 { 00:14:31.281 "name": "BaseBdev1", 00:14:31.281 "uuid": "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b", 00:14:31.281 "is_configured": true, 00:14:31.281 "data_offset": 2048, 00:14:31.281 "data_size": 63488 00:14:31.281 }, 00:14:31.281 { 00:14:31.281 "name": null, 00:14:31.281 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:31.281 "is_configured": false, 00:14:31.281 "data_offset": 0, 00:14:31.281 "data_size": 63488 00:14:31.281 }, 00:14:31.281 { 00:14:31.281 "name": "BaseBdev3", 00:14:31.281 "uuid": "0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:31.281 "is_configured": true, 00:14:31.281 "data_offset": 2048, 00:14:31.281 "data_size": 63488 00:14:31.281 } 
00:14:31.281 ] 00:14:31.281 }' 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.281 03:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 [2024-11-20 03:21:21.387965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.852 "name": "Existed_Raid", 00:14:31.852 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:31.852 "strip_size_kb": 64, 00:14:31.852 "state": "configuring", 00:14:31.852 "raid_level": "raid5f", 00:14:31.852 "superblock": true, 00:14:31.852 "num_base_bdevs": 3, 00:14:31.852 "num_base_bdevs_discovered": 1, 00:14:31.852 "num_base_bdevs_operational": 3, 00:14:31.852 "base_bdevs_list": [ 00:14:31.852 { 00:14:31.852 "name": "BaseBdev1", 00:14:31.852 "uuid": "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b", 00:14:31.852 "is_configured": true, 
00:14:31.852 "data_offset": 2048, 00:14:31.852 "data_size": 63488 00:14:31.852 }, 00:14:31.852 { 00:14:31.852 "name": null, 00:14:31.852 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:31.852 "is_configured": false, 00:14:31.852 "data_offset": 0, 00:14:31.852 "data_size": 63488 00:14:31.852 }, 00:14:31.852 { 00:14:31.852 "name": null, 00:14:31.852 "uuid": "0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:31.852 "is_configured": false, 00:14:31.852 "data_offset": 0, 00:14:31.852 "data_size": 63488 00:14:31.852 } 00:14:31.852 ] 00:14:31.852 }' 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.852 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 [2024-11-20 03:21:21.907133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.422 03:21:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.422 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.423 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:32.423 "name": "Existed_Raid", 00:14:32.423 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:32.423 "strip_size_kb": 64, 00:14:32.423 "state": "configuring", 00:14:32.423 "raid_level": "raid5f", 00:14:32.423 "superblock": true, 00:14:32.423 "num_base_bdevs": 3, 00:14:32.423 "num_base_bdevs_discovered": 2, 00:14:32.423 "num_base_bdevs_operational": 3, 00:14:32.423 "base_bdevs_list": [ 00:14:32.423 { 00:14:32.423 "name": "BaseBdev1", 00:14:32.423 "uuid": "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b", 00:14:32.423 "is_configured": true, 00:14:32.423 "data_offset": 2048, 00:14:32.423 "data_size": 63488 00:14:32.423 }, 00:14:32.423 { 00:14:32.423 "name": null, 00:14:32.423 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:32.423 "is_configured": false, 00:14:32.423 "data_offset": 0, 00:14:32.423 "data_size": 63488 00:14:32.423 }, 00:14:32.423 { 00:14:32.423 "name": "BaseBdev3", 00:14:32.423 "uuid": "0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:32.423 "is_configured": true, 00:14:32.423 "data_offset": 2048, 00:14:32.423 "data_size": 63488 00:14:32.423 } 00:14:32.423 ] 00:14:32.423 }' 00:14:32.423 03:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.423 03:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.683 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.683 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.683 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.683 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.942 [2024-11-20 03:21:22.362397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.942 03:21:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.942 "name": "Existed_Raid", 00:14:32.942 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:32.942 "strip_size_kb": 64, 00:14:32.942 "state": "configuring", 00:14:32.942 "raid_level": "raid5f", 00:14:32.942 "superblock": true, 00:14:32.942 "num_base_bdevs": 3, 00:14:32.942 "num_base_bdevs_discovered": 1, 00:14:32.942 "num_base_bdevs_operational": 3, 00:14:32.942 "base_bdevs_list": [ 00:14:32.942 { 00:14:32.942 "name": null, 00:14:32.942 "uuid": "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b", 00:14:32.942 "is_configured": false, 00:14:32.942 "data_offset": 0, 00:14:32.942 "data_size": 63488 00:14:32.942 }, 00:14:32.942 { 00:14:32.942 "name": null, 00:14:32.942 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:32.942 "is_configured": false, 00:14:32.942 "data_offset": 0, 00:14:32.942 "data_size": 63488 00:14:32.942 }, 00:14:32.942 { 00:14:32.942 "name": "BaseBdev3", 00:14:32.942 "uuid": "0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:32.942 "is_configured": true, 00:14:32.942 "data_offset": 2048, 00:14:32.942 "data_size": 63488 00:14:32.942 } 00:14:32.942 ] 00:14:32.942 }' 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.942 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.511 [2024-11-20 03:21:22.955687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.511 03:21:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.511 03:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.511 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.511 "name": "Existed_Raid", 00:14:33.511 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:33.511 "strip_size_kb": 64, 00:14:33.511 "state": "configuring", 00:14:33.511 "raid_level": "raid5f", 00:14:33.511 "superblock": true, 00:14:33.511 "num_base_bdevs": 3, 00:14:33.511 "num_base_bdevs_discovered": 2, 00:14:33.511 "num_base_bdevs_operational": 3, 00:14:33.511 "base_bdevs_list": [ 00:14:33.511 { 00:14:33.511 "name": null, 00:14:33.511 "uuid": "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b", 00:14:33.511 "is_configured": false, 00:14:33.511 "data_offset": 0, 00:14:33.511 "data_size": 63488 00:14:33.511 }, 00:14:33.511 { 00:14:33.511 "name": "BaseBdev2", 00:14:33.511 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:33.511 "is_configured": true, 00:14:33.511 "data_offset": 2048, 00:14:33.511 "data_size": 63488 00:14:33.511 }, 00:14:33.511 { 
00:14:33.511 "name": "BaseBdev3", 00:14:33.511 "uuid": "0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:33.511 "is_configured": true, 00:14:33.511 "data_offset": 2048, 00:14:33.511 "data_size": 63488 00:14:33.511 } 00:14:33.511 ] 00:14:33.511 }' 00:14:33.511 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.511 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.082 [2024-11-20 03:21:23.542831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:34.082 [2024-11-20 03:21:23.543047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:34.082 [2024-11-20 03:21:23.543064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:34.082 [2024-11-20 03:21:23.543302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:34.082 NewBaseBdev 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.082 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.082 [2024-11-20 03:21:23.548872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:34.083 
[2024-11-20 03:21:23.548943] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:34.083 [2024-11-20 03:21:23.549142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.083 [ 00:14:34.083 { 00:14:34.083 "name": "NewBaseBdev", 00:14:34.083 "aliases": [ 00:14:34.083 "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b" 00:14:34.083 ], 00:14:34.083 "product_name": "Malloc disk", 00:14:34.083 "block_size": 512, 00:14:34.083 "num_blocks": 65536, 00:14:34.083 "uuid": "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b", 00:14:34.083 "assigned_rate_limits": { 00:14:34.083 "rw_ios_per_sec": 0, 00:14:34.083 "rw_mbytes_per_sec": 0, 00:14:34.083 "r_mbytes_per_sec": 0, 00:14:34.083 "w_mbytes_per_sec": 0 00:14:34.083 }, 00:14:34.083 "claimed": true, 00:14:34.083 "claim_type": "exclusive_write", 00:14:34.083 "zoned": false, 00:14:34.083 "supported_io_types": { 00:14:34.083 "read": true, 00:14:34.083 "write": true, 00:14:34.083 "unmap": true, 00:14:34.083 "flush": true, 00:14:34.083 "reset": true, 00:14:34.083 "nvme_admin": false, 00:14:34.083 "nvme_io": false, 00:14:34.083 "nvme_io_md": false, 00:14:34.083 "write_zeroes": true, 00:14:34.083 "zcopy": true, 00:14:34.083 "get_zone_info": false, 00:14:34.083 "zone_management": false, 00:14:34.083 "zone_append": false, 00:14:34.083 "compare": false, 00:14:34.083 "compare_and_write": false, 00:14:34.083 "abort": true, 00:14:34.083 "seek_hole": false, 00:14:34.083 "seek_data": false, 
00:14:34.083 "copy": true, 00:14:34.083 "nvme_iov_md": false 00:14:34.083 }, 00:14:34.083 "memory_domains": [ 00:14:34.083 { 00:14:34.083 "dma_device_id": "system", 00:14:34.083 "dma_device_type": 1 00:14:34.083 }, 00:14:34.083 { 00:14:34.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.083 "dma_device_type": 2 00:14:34.083 } 00:14:34.083 ], 00:14:34.083 "driver_specific": {} 00:14:34.083 } 00:14:34.083 ] 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.083 03:21:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.083 "name": "Existed_Raid", 00:14:34.083 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:34.083 "strip_size_kb": 64, 00:14:34.083 "state": "online", 00:14:34.083 "raid_level": "raid5f", 00:14:34.083 "superblock": true, 00:14:34.083 "num_base_bdevs": 3, 00:14:34.083 "num_base_bdevs_discovered": 3, 00:14:34.083 "num_base_bdevs_operational": 3, 00:14:34.083 "base_bdevs_list": [ 00:14:34.083 { 00:14:34.083 "name": "NewBaseBdev", 00:14:34.083 "uuid": "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b", 00:14:34.083 "is_configured": true, 00:14:34.083 "data_offset": 2048, 00:14:34.083 "data_size": 63488 00:14:34.083 }, 00:14:34.083 { 00:14:34.083 "name": "BaseBdev2", 00:14:34.083 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:34.083 "is_configured": true, 00:14:34.083 "data_offset": 2048, 00:14:34.083 "data_size": 63488 00:14:34.083 }, 00:14:34.083 { 00:14:34.083 "name": "BaseBdev3", 00:14:34.083 "uuid": "0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:34.083 "is_configured": true, 00:14:34.083 "data_offset": 2048, 00:14:34.083 "data_size": 63488 00:14:34.083 } 00:14:34.083 ] 00:14:34.083 }' 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.083 03:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:34.654 [2024-11-20 03:21:24.038725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:34.654 "name": "Existed_Raid", 00:14:34.654 "aliases": [ 00:14:34.654 "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea" 00:14:34.654 ], 00:14:34.654 "product_name": "Raid Volume", 00:14:34.654 "block_size": 512, 00:14:34.654 "num_blocks": 126976, 00:14:34.654 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:34.654 "assigned_rate_limits": { 00:14:34.654 "rw_ios_per_sec": 0, 00:14:34.654 "rw_mbytes_per_sec": 0, 00:14:34.654 "r_mbytes_per_sec": 0, 00:14:34.654 "w_mbytes_per_sec": 0 00:14:34.654 }, 00:14:34.654 "claimed": false, 00:14:34.654 "zoned": false, 00:14:34.654 
"supported_io_types": { 00:14:34.654 "read": true, 00:14:34.654 "write": true, 00:14:34.654 "unmap": false, 00:14:34.654 "flush": false, 00:14:34.654 "reset": true, 00:14:34.654 "nvme_admin": false, 00:14:34.654 "nvme_io": false, 00:14:34.654 "nvme_io_md": false, 00:14:34.654 "write_zeroes": true, 00:14:34.654 "zcopy": false, 00:14:34.654 "get_zone_info": false, 00:14:34.654 "zone_management": false, 00:14:34.654 "zone_append": false, 00:14:34.654 "compare": false, 00:14:34.654 "compare_and_write": false, 00:14:34.654 "abort": false, 00:14:34.654 "seek_hole": false, 00:14:34.654 "seek_data": false, 00:14:34.654 "copy": false, 00:14:34.654 "nvme_iov_md": false 00:14:34.654 }, 00:14:34.654 "driver_specific": { 00:14:34.654 "raid": { 00:14:34.654 "uuid": "8da0005e-3032-48e4-a2d3-ecba4a5ca9ea", 00:14:34.654 "strip_size_kb": 64, 00:14:34.654 "state": "online", 00:14:34.654 "raid_level": "raid5f", 00:14:34.654 "superblock": true, 00:14:34.654 "num_base_bdevs": 3, 00:14:34.654 "num_base_bdevs_discovered": 3, 00:14:34.654 "num_base_bdevs_operational": 3, 00:14:34.654 "base_bdevs_list": [ 00:14:34.654 { 00:14:34.654 "name": "NewBaseBdev", 00:14:34.654 "uuid": "d27b76e8-3ea0-4571-9e11-4cb1c2a1e84b", 00:14:34.654 "is_configured": true, 00:14:34.654 "data_offset": 2048, 00:14:34.654 "data_size": 63488 00:14:34.654 }, 00:14:34.654 { 00:14:34.654 "name": "BaseBdev2", 00:14:34.654 "uuid": "34c00442-6821-4579-9ad3-e209929c5889", 00:14:34.654 "is_configured": true, 00:14:34.654 "data_offset": 2048, 00:14:34.654 "data_size": 63488 00:14:34.654 }, 00:14:34.654 { 00:14:34.654 "name": "BaseBdev3", 00:14:34.654 "uuid": "0ac22b81-8fb6-4875-b36d-a3ca927aee58", 00:14:34.654 "is_configured": true, 00:14:34.654 "data_offset": 2048, 00:14:34.654 "data_size": 63488 00:14:34.654 } 00:14:34.654 ] 00:14:34.654 } 00:14:34.654 } 00:14:34.654 }' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:34.654 BaseBdev2 00:14:34.654 BaseBdev3' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.654 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.914 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.915 [2024-11-20 03:21:24.314014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.915 [2024-11-20 03:21:24.314082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:14:34.915 [2024-11-20 03:21:24.314179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.915 [2024-11-20 03:21:24.314489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.915 [2024-11-20 03:21:24.314506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80326 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80326 ']' 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80326 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80326 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.915 killing process with pid 80326 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80326' 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80326 00:14:34.915 [2024-11-20 03:21:24.360731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.915 03:21:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80326 00:14:35.174 [2024-11-20 03:21:24.656926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.113 03:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:36.113 00:14:36.113 real 0m10.726s 00:14:36.113 user 0m17.167s 00:14:36.113 sys 0m1.883s 00:14:36.113 03:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.113 03:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.113 ************************************ 00:14:36.113 END TEST raid5f_state_function_test_sb 00:14:36.113 ************************************ 00:14:36.374 03:21:25 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:36.374 03:21:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:36.374 03:21:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.374 03:21:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.374 ************************************ 00:14:36.374 START TEST raid5f_superblock_test 00:14:36.374 ************************************ 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80941 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80941 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80941 ']' 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.374 03:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.374 [2024-11-20 03:21:25.881981] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:14:36.374 [2024-11-20 03:21:25.882172] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80941 ] 00:14:36.633 [2024-11-20 03:21:26.039770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.633 [2024-11-20 03:21:26.157310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.889 [2024-11-20 03:21:26.363674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.889 [2024-11-20 03:21:26.363801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.148 malloc1 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.148 [2024-11-20 03:21:26.756144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:37.148 [2024-11-20 03:21:26.756207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.148 [2024-11-20 03:21:26.756231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:37.148 [2024-11-20 03:21:26.756240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.148 [2024-11-20 03:21:26.758326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.148 [2024-11-20 03:21:26.758447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:37.148 pt1 00:14:37.148 
03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.148 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.406 malloc2 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.406 [2024-11-20 03:21:26.812176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.406 [2024-11-20 
03:21:26.812277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.406 [2024-11-20 03:21:26.812318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:37.406 [2024-11-20 03:21:26.812345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.406 [2024-11-20 03:21:26.814432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.406 [2024-11-20 03:21:26.814519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.406 pt2 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.406 malloc3 00:14:37.406 03:21:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.406 [2024-11-20 03:21:26.882361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:37.406 [2024-11-20 03:21:26.882423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.406 [2024-11-20 03:21:26.882444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:37.406 [2024-11-20 03:21:26.882453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.406 [2024-11-20 03:21:26.884728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.406 [2024-11-20 03:21:26.884798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:37.406 pt3 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.406 [2024-11-20 03:21:26.894398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:14:37.406 [2024-11-20 03:21:26.896239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.406 [2024-11-20 03:21:26.896299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:37.406 [2024-11-20 03:21:26.896454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:37.406 [2024-11-20 03:21:26.896471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:37.406 [2024-11-20 03:21:26.896723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:37.406 [2024-11-20 03:21:26.902015] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:37.406 [2024-11-20 03:21:26.902034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:37.406 [2024-11-20 03:21:26.902221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:37.406 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.407 "name": "raid_bdev1", 00:14:37.407 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:37.407 "strip_size_kb": 64, 00:14:37.407 "state": "online", 00:14:37.407 "raid_level": "raid5f", 00:14:37.407 "superblock": true, 00:14:37.407 "num_base_bdevs": 3, 00:14:37.407 "num_base_bdevs_discovered": 3, 00:14:37.407 "num_base_bdevs_operational": 3, 00:14:37.407 "base_bdevs_list": [ 00:14:37.407 { 00:14:37.407 "name": "pt1", 00:14:37.407 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.407 "is_configured": true, 00:14:37.407 "data_offset": 2048, 00:14:37.407 "data_size": 63488 00:14:37.407 }, 00:14:37.407 { 00:14:37.407 "name": "pt2", 00:14:37.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.407 "is_configured": true, 00:14:37.407 "data_offset": 2048, 00:14:37.407 "data_size": 63488 00:14:37.407 }, 00:14:37.407 { 00:14:37.407 "name": "pt3", 00:14:37.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.407 "is_configured": true, 00:14:37.407 "data_offset": 2048, 00:14:37.407 "data_size": 63488 00:14:37.407 } 00:14:37.407 ] 
00:14:37.407 }' 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.407 03:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.974 [2024-11-20 03:21:27.356144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.974 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.975 "name": "raid_bdev1", 00:14:37.975 "aliases": [ 00:14:37.975 "92f15669-7239-4137-9c10-f94478f3cc59" 00:14:37.975 ], 00:14:37.975 "product_name": "Raid Volume", 00:14:37.975 "block_size": 512, 00:14:37.975 "num_blocks": 126976, 00:14:37.975 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:37.975 "assigned_rate_limits": { 00:14:37.975 
"rw_ios_per_sec": 0, 00:14:37.975 "rw_mbytes_per_sec": 0, 00:14:37.975 "r_mbytes_per_sec": 0, 00:14:37.975 "w_mbytes_per_sec": 0 00:14:37.975 }, 00:14:37.975 "claimed": false, 00:14:37.975 "zoned": false, 00:14:37.975 "supported_io_types": { 00:14:37.975 "read": true, 00:14:37.975 "write": true, 00:14:37.975 "unmap": false, 00:14:37.975 "flush": false, 00:14:37.975 "reset": true, 00:14:37.975 "nvme_admin": false, 00:14:37.975 "nvme_io": false, 00:14:37.975 "nvme_io_md": false, 00:14:37.975 "write_zeroes": true, 00:14:37.975 "zcopy": false, 00:14:37.975 "get_zone_info": false, 00:14:37.975 "zone_management": false, 00:14:37.975 "zone_append": false, 00:14:37.975 "compare": false, 00:14:37.975 "compare_and_write": false, 00:14:37.975 "abort": false, 00:14:37.975 "seek_hole": false, 00:14:37.975 "seek_data": false, 00:14:37.975 "copy": false, 00:14:37.975 "nvme_iov_md": false 00:14:37.975 }, 00:14:37.975 "driver_specific": { 00:14:37.975 "raid": { 00:14:37.975 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:37.975 "strip_size_kb": 64, 00:14:37.975 "state": "online", 00:14:37.975 "raid_level": "raid5f", 00:14:37.975 "superblock": true, 00:14:37.975 "num_base_bdevs": 3, 00:14:37.975 "num_base_bdevs_discovered": 3, 00:14:37.975 "num_base_bdevs_operational": 3, 00:14:37.975 "base_bdevs_list": [ 00:14:37.975 { 00:14:37.975 "name": "pt1", 00:14:37.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.975 "is_configured": true, 00:14:37.975 "data_offset": 2048, 00:14:37.975 "data_size": 63488 00:14:37.975 }, 00:14:37.975 { 00:14:37.975 "name": "pt2", 00:14:37.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.975 "is_configured": true, 00:14:37.975 "data_offset": 2048, 00:14:37.975 "data_size": 63488 00:14:37.975 }, 00:14:37.975 { 00:14:37.975 "name": "pt3", 00:14:37.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.975 "is_configured": true, 00:14:37.975 "data_offset": 2048, 00:14:37.975 "data_size": 63488 00:14:37.975 } 00:14:37.975 ] 
00:14:37.975 } 00:14:37.975 } 00:14:37.975 }' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:37.975 pt2 00:14:37.975 pt3' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.975 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.234 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.234 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.234 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.234 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:38.234 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.234 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:38.235 [2024-11-20 03:21:27.635683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.235 03:21:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92f15669-7239-4137-9c10-f94478f3cc59 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 92f15669-7239-4137-9c10-f94478f3cc59 ']' 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 [2024-11-20 03:21:27.683364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.235 [2024-11-20 03:21:27.683393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.235 [2024-11-20 03:21:27.683475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.235 [2024-11-20 03:21:27.683550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.235 [2024-11-20 03:21:27.683560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 [2024-11-20 03:21:27.827173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:38.235 [2024-11-20 
03:21:27.829066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:38.235 [2024-11-20 03:21:27.829176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:38.235 [2024-11-20 03:21:27.829246] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:38.235 [2024-11-20 03:21:27.829344] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:38.235 [2024-11-20 03:21:27.829415] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:38.235 [2024-11-20 03:21:27.829467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.235 [2024-11-20 03:21:27.829515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:38.235 request: 00:14:38.235 { 00:14:38.235 "name": "raid_bdev1", 00:14:38.235 "raid_level": "raid5f", 00:14:38.235 "base_bdevs": [ 00:14:38.235 "malloc1", 00:14:38.235 "malloc2", 00:14:38.235 "malloc3" 00:14:38.235 ], 00:14:38.235 "strip_size_kb": 64, 00:14:38.235 "superblock": false, 00:14:38.235 "method": "bdev_raid_create", 00:14:38.235 "req_id": 1 00:14:38.235 } 00:14:38.235 Got JSON-RPC error response 00:14:38.235 response: 00:14:38.235 { 00:14:38.235 "code": -17, 00:14:38.235 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:38.235 } 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:38.235 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.495 [2024-11-20 03:21:27.894993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:38.495 [2024-11-20 03:21:27.895054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.495 [2024-11-20 03:21:27.895073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:38.495 [2024-11-20 03:21:27.895082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.495 [2024-11-20 03:21:27.897202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.495 [2024-11-20 03:21:27.897252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:38.495 [2024-11-20 03:21:27.897331] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:38.495 [2024-11-20 03:21:27.897378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:38.495 pt1 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.495 "name": "raid_bdev1", 00:14:38.495 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:38.495 "strip_size_kb": 64, 00:14:38.495 "state": "configuring", 00:14:38.495 "raid_level": "raid5f", 00:14:38.495 "superblock": true, 00:14:38.495 "num_base_bdevs": 3, 00:14:38.495 "num_base_bdevs_discovered": 1, 00:14:38.495 "num_base_bdevs_operational": 3, 00:14:38.495 "base_bdevs_list": [ 00:14:38.495 { 00:14:38.495 "name": "pt1", 00:14:38.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.495 "is_configured": true, 00:14:38.495 "data_offset": 2048, 00:14:38.495 "data_size": 63488 00:14:38.495 }, 00:14:38.495 { 00:14:38.495 "name": null, 00:14:38.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.495 "is_configured": false, 00:14:38.495 "data_offset": 2048, 00:14:38.495 "data_size": 63488 00:14:38.495 }, 00:14:38.495 { 00:14:38.495 "name": null, 00:14:38.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.495 "is_configured": false, 00:14:38.495 "data_offset": 2048, 00:14:38.495 "data_size": 63488 00:14:38.495 } 00:14:38.495 ] 00:14:38.495 }' 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.495 03:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.754 [2024-11-20 03:21:28.358230] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.754 [2024-11-20 03:21:28.358296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.754 [2024-11-20 03:21:28.358318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:38.754 [2024-11-20 03:21:28.358328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.754 [2024-11-20 03:21:28.358803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.754 [2024-11-20 03:21:28.358828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.754 [2024-11-20 03:21:28.358914] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:38.754 [2024-11-20 03:21:28.358941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.754 pt2 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.754 [2024-11-20 03:21:28.370211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.754 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.013 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.013 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.013 "name": "raid_bdev1", 00:14:39.013 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:39.013 "strip_size_kb": 64, 00:14:39.013 "state": "configuring", 00:14:39.013 "raid_level": "raid5f", 00:14:39.013 "superblock": true, 00:14:39.013 "num_base_bdevs": 3, 00:14:39.013 "num_base_bdevs_discovered": 1, 00:14:39.013 "num_base_bdevs_operational": 3, 00:14:39.013 "base_bdevs_list": [ 00:14:39.013 { 00:14:39.013 "name": "pt1", 00:14:39.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.013 "is_configured": true, 00:14:39.013 "data_offset": 2048, 00:14:39.013 "data_size": 63488 00:14:39.013 }, 00:14:39.013 { 
00:14:39.013 "name": null, 00:14:39.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.013 "is_configured": false, 00:14:39.013 "data_offset": 0, 00:14:39.013 "data_size": 63488 00:14:39.013 }, 00:14:39.013 { 00:14:39.013 "name": null, 00:14:39.013 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.013 "is_configured": false, 00:14:39.013 "data_offset": 2048, 00:14:39.013 "data_size": 63488 00:14:39.013 } 00:14:39.013 ] 00:14:39.013 }' 00:14:39.013 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.013 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.274 [2024-11-20 03:21:28.749549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:39.274 [2024-11-20 03:21:28.749690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.274 [2024-11-20 03:21:28.749736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:39.274 [2024-11-20 03:21:28.749772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.274 [2024-11-20 03:21:28.750240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.274 [2024-11-20 03:21:28.750305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:39.274 [2024-11-20 
03:21:28.750413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:39.274 [2024-11-20 03:21:28.750490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:39.274 pt2 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.274 [2024-11-20 03:21:28.761501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:39.274 [2024-11-20 03:21:28.761599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.274 [2024-11-20 03:21:28.761639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:39.274 [2024-11-20 03:21:28.761669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.274 [2024-11-20 03:21:28.762046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.274 [2024-11-20 03:21:28.762111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:39.274 [2024-11-20 03:21:28.762199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:39.274 [2024-11-20 03:21:28.762246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:39.274 [2024-11-20 03:21:28.762395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:14:39.274 [2024-11-20 03:21:28.762442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:39.274 [2024-11-20 03:21:28.762756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:39.274 [2024-11-20 03:21:28.768114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:39.274 [2024-11-20 03:21:28.768164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:39.274 [2024-11-20 03:21:28.768378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.274 pt3 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.274 "name": "raid_bdev1", 00:14:39.274 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:39.274 "strip_size_kb": 64, 00:14:39.274 "state": "online", 00:14:39.274 "raid_level": "raid5f", 00:14:39.274 "superblock": true, 00:14:39.274 "num_base_bdevs": 3, 00:14:39.274 "num_base_bdevs_discovered": 3, 00:14:39.274 "num_base_bdevs_operational": 3, 00:14:39.274 "base_bdevs_list": [ 00:14:39.274 { 00:14:39.274 "name": "pt1", 00:14:39.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.274 "is_configured": true, 00:14:39.274 "data_offset": 2048, 00:14:39.274 "data_size": 63488 00:14:39.274 }, 00:14:39.274 { 00:14:39.274 "name": "pt2", 00:14:39.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.274 "is_configured": true, 00:14:39.274 "data_offset": 2048, 00:14:39.274 "data_size": 63488 00:14:39.274 }, 00:14:39.274 { 00:14:39.274 "name": "pt3", 00:14:39.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.274 "is_configured": true, 00:14:39.274 "data_offset": 2048, 00:14:39.274 "data_size": 63488 00:14:39.274 } 00:14:39.274 ] 00:14:39.274 }' 00:14:39.274 03:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.274 03:21:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.840 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:39.840 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:39.840 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.840 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.841 [2024-11-20 03:21:29.198609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.841 "name": "raid_bdev1", 00:14:39.841 "aliases": [ 00:14:39.841 "92f15669-7239-4137-9c10-f94478f3cc59" 00:14:39.841 ], 00:14:39.841 "product_name": "Raid Volume", 00:14:39.841 "block_size": 512, 00:14:39.841 "num_blocks": 126976, 00:14:39.841 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:39.841 "assigned_rate_limits": { 00:14:39.841 "rw_ios_per_sec": 0, 00:14:39.841 "rw_mbytes_per_sec": 0, 00:14:39.841 "r_mbytes_per_sec": 0, 00:14:39.841 "w_mbytes_per_sec": 0 00:14:39.841 }, 
00:14:39.841 "claimed": false, 00:14:39.841 "zoned": false, 00:14:39.841 "supported_io_types": { 00:14:39.841 "read": true, 00:14:39.841 "write": true, 00:14:39.841 "unmap": false, 00:14:39.841 "flush": false, 00:14:39.841 "reset": true, 00:14:39.841 "nvme_admin": false, 00:14:39.841 "nvme_io": false, 00:14:39.841 "nvme_io_md": false, 00:14:39.841 "write_zeroes": true, 00:14:39.841 "zcopy": false, 00:14:39.841 "get_zone_info": false, 00:14:39.841 "zone_management": false, 00:14:39.841 "zone_append": false, 00:14:39.841 "compare": false, 00:14:39.841 "compare_and_write": false, 00:14:39.841 "abort": false, 00:14:39.841 "seek_hole": false, 00:14:39.841 "seek_data": false, 00:14:39.841 "copy": false, 00:14:39.841 "nvme_iov_md": false 00:14:39.841 }, 00:14:39.841 "driver_specific": { 00:14:39.841 "raid": { 00:14:39.841 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:39.841 "strip_size_kb": 64, 00:14:39.841 "state": "online", 00:14:39.841 "raid_level": "raid5f", 00:14:39.841 "superblock": true, 00:14:39.841 "num_base_bdevs": 3, 00:14:39.841 "num_base_bdevs_discovered": 3, 00:14:39.841 "num_base_bdevs_operational": 3, 00:14:39.841 "base_bdevs_list": [ 00:14:39.841 { 00:14:39.841 "name": "pt1", 00:14:39.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.841 "is_configured": true, 00:14:39.841 "data_offset": 2048, 00:14:39.841 "data_size": 63488 00:14:39.841 }, 00:14:39.841 { 00:14:39.841 "name": "pt2", 00:14:39.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.841 "is_configured": true, 00:14:39.841 "data_offset": 2048, 00:14:39.841 "data_size": 63488 00:14:39.841 }, 00:14:39.841 { 00:14:39.841 "name": "pt3", 00:14:39.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.841 "is_configured": true, 00:14:39.841 "data_offset": 2048, 00:14:39.841 "data_size": 63488 00:14:39.841 } 00:14:39.841 ] 00:14:39.841 } 00:14:39.841 } 00:14:39.841 }' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:39.841 pt2 00:14:39.841 pt3' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:39.841 [2024-11-20 03:21:29.454098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.841 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
92f15669-7239-4137-9c10-f94478f3cc59 '!=' 92f15669-7239-4137-9c10-f94478f3cc59 ']' 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.100 [2024-11-20 03:21:29.501885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.100 "name": "raid_bdev1", 00:14:40.100 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:40.100 "strip_size_kb": 64, 00:14:40.100 "state": "online", 00:14:40.100 "raid_level": "raid5f", 00:14:40.100 "superblock": true, 00:14:40.100 "num_base_bdevs": 3, 00:14:40.100 "num_base_bdevs_discovered": 2, 00:14:40.100 "num_base_bdevs_operational": 2, 00:14:40.100 "base_bdevs_list": [ 00:14:40.100 { 00:14:40.100 "name": null, 00:14:40.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.100 "is_configured": false, 00:14:40.100 "data_offset": 0, 00:14:40.100 "data_size": 63488 00:14:40.100 }, 00:14:40.100 { 00:14:40.100 "name": "pt2", 00:14:40.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.100 "is_configured": true, 00:14:40.100 "data_offset": 2048, 00:14:40.100 "data_size": 63488 00:14:40.100 }, 00:14:40.100 { 00:14:40.100 "name": "pt3", 00:14:40.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.100 "is_configured": true, 00:14:40.100 "data_offset": 2048, 00:14:40.100 "data_size": 63488 00:14:40.100 } 00:14:40.100 ] 00:14:40.100 }' 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.100 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.359 
03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.359 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.359 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.360 [2024-11-20 03:21:29.901145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.360 [2024-11-20 03:21:29.901173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.360 [2024-11-20 03:21:29.901255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.360 [2024-11-20 03:21:29.901310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.360 [2024-11-20 03:21:29.901325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.360 [2024-11-20 03:21:29.980976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:14:40.360 [2024-11-20 03:21:29.981088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.360 [2024-11-20 03:21:29.981122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:40.360 [2024-11-20 03:21:29.981151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.360 [2024-11-20 03:21:29.983295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.360 [2024-11-20 03:21:29.983368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.360 [2024-11-20 03:21:29.983461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:40.360 [2024-11-20 03:21:29.983540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.360 pt2 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.360 03:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.619 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.619 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.619 "name": "raid_bdev1", 00:14:40.619 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:40.619 "strip_size_kb": 64, 00:14:40.619 "state": "configuring", 00:14:40.619 "raid_level": "raid5f", 00:14:40.619 "superblock": true, 00:14:40.619 "num_base_bdevs": 3, 00:14:40.619 "num_base_bdevs_discovered": 1, 00:14:40.619 "num_base_bdevs_operational": 2, 00:14:40.619 "base_bdevs_list": [ 00:14:40.619 { 00:14:40.619 "name": null, 00:14:40.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.619 "is_configured": false, 00:14:40.619 "data_offset": 2048, 00:14:40.619 "data_size": 63488 00:14:40.619 }, 00:14:40.619 { 00:14:40.619 "name": "pt2", 00:14:40.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.619 "is_configured": true, 00:14:40.619 "data_offset": 2048, 00:14:40.619 "data_size": 63488 00:14:40.619 }, 00:14:40.619 { 00:14:40.619 "name": null, 00:14:40.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.620 "is_configured": false, 00:14:40.620 "data_offset": 2048, 00:14:40.620 "data_size": 63488 00:14:40.620 } 00:14:40.620 ] 00:14:40.620 }' 00:14:40.620 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.620 03:21:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.879 [2024-11-20 03:21:30.404282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.879 [2024-11-20 03:21:30.404354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.879 [2024-11-20 03:21:30.404375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:40.879 [2024-11-20 03:21:30.404386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.879 [2024-11-20 03:21:30.404862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.879 [2024-11-20 03:21:30.404885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.879 [2024-11-20 03:21:30.404968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:40.879 [2024-11-20 03:21:30.405000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.879 [2024-11-20 03:21:30.405136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:40.879 [2024-11-20 03:21:30.405153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:40.879 [2024-11-20 
03:21:30.405398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:40.879 [2024-11-20 03:21:30.410757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:40.879 [2024-11-20 03:21:30.410777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:40.879 [2024-11-20 03:21:30.411087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.879 pt3 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.879 "name": "raid_bdev1", 00:14:40.879 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:40.879 "strip_size_kb": 64, 00:14:40.879 "state": "online", 00:14:40.879 "raid_level": "raid5f", 00:14:40.879 "superblock": true, 00:14:40.879 "num_base_bdevs": 3, 00:14:40.879 "num_base_bdevs_discovered": 2, 00:14:40.879 "num_base_bdevs_operational": 2, 00:14:40.879 "base_bdevs_list": [ 00:14:40.879 { 00:14:40.879 "name": null, 00:14:40.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.879 "is_configured": false, 00:14:40.879 "data_offset": 2048, 00:14:40.879 "data_size": 63488 00:14:40.879 }, 00:14:40.879 { 00:14:40.879 "name": "pt2", 00:14:40.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.879 "is_configured": true, 00:14:40.879 "data_offset": 2048, 00:14:40.879 "data_size": 63488 00:14:40.879 }, 00:14:40.879 { 00:14:40.879 "name": "pt3", 00:14:40.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.879 "is_configured": true, 00:14:40.879 "data_offset": 2048, 00:14:40.879 "data_size": 63488 00:14:40.879 } 00:14:40.879 ] 00:14:40.879 }' 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.879 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:41.449 [2024-11-20 03:21:30.881862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.449 [2024-11-20 03:21:30.881938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.449 [2024-11-20 03:21:30.882035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.449 [2024-11-20 03:21:30.882131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.449 [2024-11-20 03:21:30.882170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.449 03:21:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.449 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.450 [2024-11-20 03:21:30.949758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.450 [2024-11-20 03:21:30.949818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.450 [2024-11-20 03:21:30.949837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:41.450 [2024-11-20 03:21:30.949845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.450 [2024-11-20 03:21:30.952234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.450 [2024-11-20 03:21:30.952325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.450 [2024-11-20 03:21:30.952420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.450 [2024-11-20 03:21:30.952468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.450 [2024-11-20 03:21:30.952611] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:41.450 [2024-11-20 03:21:30.952623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.450 [2024-11-20 03:21:30.952661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:41.450 
[2024-11-20 03:21:30.952731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.450 pt1 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.450 03:21:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.450 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.450 "name": "raid_bdev1", 00:14:41.450 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:41.450 "strip_size_kb": 64, 00:14:41.450 "state": "configuring", 00:14:41.450 "raid_level": "raid5f", 00:14:41.450 "superblock": true, 00:14:41.450 "num_base_bdevs": 3, 00:14:41.450 "num_base_bdevs_discovered": 1, 00:14:41.450 "num_base_bdevs_operational": 2, 00:14:41.450 "base_bdevs_list": [ 00:14:41.450 { 00:14:41.450 "name": null, 00:14:41.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.450 "is_configured": false, 00:14:41.450 "data_offset": 2048, 00:14:41.450 "data_size": 63488 00:14:41.450 }, 00:14:41.450 { 00:14:41.450 "name": "pt2", 00:14:41.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.450 "is_configured": true, 00:14:41.450 "data_offset": 2048, 00:14:41.450 "data_size": 63488 00:14:41.450 }, 00:14:41.450 { 00:14:41.450 "name": null, 00:14:41.450 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.450 "is_configured": false, 00:14:41.450 "data_offset": 2048, 00:14:41.450 "data_size": 63488 00:14:41.450 } 00:14:41.450 ] 00:14:41.450 }' 00:14:41.450 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.450 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.021 [2024-11-20 03:21:31.397008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:42.021 [2024-11-20 03:21:31.397132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.021 [2024-11-20 03:21:31.397175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:42.021 [2024-11-20 03:21:31.397208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.021 [2024-11-20 03:21:31.397761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.021 [2024-11-20 03:21:31.397824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.021 [2024-11-20 03:21:31.397943] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:42.021 [2024-11-20 03:21:31.397998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.021 [2024-11-20 03:21:31.398160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:42.021 [2024-11-20 03:21:31.398199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:42.021 [2024-11-20 03:21:31.398499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:42.021 [2024-11-20 03:21:31.404538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:42.021 [2024-11-20 
03:21:31.404597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:42.021 [2024-11-20 03:21:31.404928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.021 pt3 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.021 03:21:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.021 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.021 "name": "raid_bdev1", 00:14:42.021 "uuid": "92f15669-7239-4137-9c10-f94478f3cc59", 00:14:42.021 "strip_size_kb": 64, 00:14:42.021 "state": "online", 00:14:42.021 "raid_level": "raid5f", 00:14:42.021 "superblock": true, 00:14:42.021 "num_base_bdevs": 3, 00:14:42.021 "num_base_bdevs_discovered": 2, 00:14:42.022 "num_base_bdevs_operational": 2, 00:14:42.022 "base_bdevs_list": [ 00:14:42.022 { 00:14:42.022 "name": null, 00:14:42.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.022 "is_configured": false, 00:14:42.022 "data_offset": 2048, 00:14:42.022 "data_size": 63488 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "name": "pt2", 00:14:42.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.022 "is_configured": true, 00:14:42.022 "data_offset": 2048, 00:14:42.022 "data_size": 63488 00:14:42.022 }, 00:14:42.022 { 00:14:42.022 "name": "pt3", 00:14:42.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.022 "is_configured": true, 00:14:42.022 "data_offset": 2048, 00:14:42.022 "data_size": 63488 00:14:42.022 } 00:14:42.022 ] 00:14:42.022 }' 00:14:42.022 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.022 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.282 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.282 [2024-11-20 03:21:31.911529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 92f15669-7239-4137-9c10-f94478f3cc59 '!=' 92f15669-7239-4137-9c10-f94478f3cc59 ']' 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80941 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80941 ']' 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80941 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80941 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 80941' 00:14:42.542 killing process with pid 80941 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80941 00:14:42.542 [2024-11-20 03:21:31.989868] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.542 03:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80941 00:14:42.542 [2024-11-20 03:21:31.990025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.542 [2024-11-20 03:21:31.990122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.542 [2024-11-20 03:21:31.990177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:42.802 [2024-11-20 03:21:32.290628] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.182 ************************************ 00:14:44.182 END TEST raid5f_superblock_test 00:14:44.182 ************************************ 00:14:44.182 03:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:44.182 00:14:44.182 real 0m7.584s 00:14:44.182 user 0m11.919s 00:14:44.182 sys 0m1.274s 00:14:44.182 03:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.182 03:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.182 03:21:33 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:44.182 03:21:33 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:44.182 03:21:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:44.182 03:21:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.182 03:21:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.182 ************************************ 00:14:44.182 START TEST raid5f_rebuild_test 
00:14:44.182 ************************************ 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:44.182 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:44.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.183 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81385 00:14:44.183 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81385 00:14:44.183 03:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81385 ']' 00:14:44.183 03:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.183 03:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.183 03:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:44.183 03:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.183 03:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.183 03:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:44.183 [2024-11-20 03:21:33.541779] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:14:44.183 [2024-11-20 03:21:33.541995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:44.183 Zero copy mechanism will not be used. 00:14:44.183 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81385 ] 00:14:44.183 [2024-11-20 03:21:33.716175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.443 [2024-11-20 03:21:33.829993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.443 [2024-11-20 03:21:34.033042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.443 [2024-11-20 03:21:34.033135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 BaseBdev1_malloc 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 [2024-11-20 03:21:34.405212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:45.029 [2024-11-20 03:21:34.405281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.029 [2024-11-20 03:21:34.405306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:45.029 [2024-11-20 03:21:34.405317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.029 [2024-11-20 03:21:34.407369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.029 [2024-11-20 03:21:34.407499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:45.029 BaseBdev1 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 BaseBdev2_malloc 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 [2024-11-20 03:21:34.459488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:45.029 [2024-11-20 03:21:34.459550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.029 [2024-11-20 03:21:34.459570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:45.029 [2024-11-20 03:21:34.459584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.029 [2024-11-20 03:21:34.461642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.029 [2024-11-20 03:21:34.461674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:45.029 BaseBdev2 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 BaseBdev3_malloc 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 [2024-11-20 03:21:34.527275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:45.029 [2024-11-20 03:21:34.527333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.029 [2024-11-20 03:21:34.527355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:45.029 [2024-11-20 03:21:34.527366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.029 [2024-11-20 03:21:34.529425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.029 [2024-11-20 03:21:34.529463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:45.029 BaseBdev3 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 spare_malloc 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 spare_delay 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 03:21:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 [2024-11-20 03:21:34.592980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.029 [2024-11-20 03:21:34.593034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.029 [2024-11-20 03:21:34.593052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:45.029 [2024-11-20 03:21:34.593062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.029 [2024-11-20 03:21:34.595211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.029 [2024-11-20 03:21:34.595256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.029 spare 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 [2024-11-20 03:21:34.605034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.029 [2024-11-20 03:21:34.606826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.029 [2024-11-20 03:21:34.606907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.029 [2024-11-20 03:21:34.607017] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:45.029 [2024-11-20 03:21:34.607031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:45.029 [2024-11-20 03:21:34.607332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:45.029 [2024-11-20 03:21:34.613091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:45.029 [2024-11-20 03:21:34.613112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:45.029 [2024-11-20 03:21:34.613292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.300 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.300 "name": "raid_bdev1", 00:14:45.300 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:45.300 "strip_size_kb": 64, 00:14:45.300 "state": "online", 00:14:45.300 "raid_level": "raid5f", 00:14:45.300 "superblock": false, 00:14:45.300 "num_base_bdevs": 3, 00:14:45.300 "num_base_bdevs_discovered": 3, 00:14:45.300 "num_base_bdevs_operational": 3, 00:14:45.300 "base_bdevs_list": [ 00:14:45.300 { 00:14:45.300 "name": "BaseBdev1", 00:14:45.300 "uuid": "4ff91b47-a953-5482-911c-d593312b678f", 00:14:45.300 "is_configured": true, 00:14:45.300 "data_offset": 0, 00:14:45.300 "data_size": 65536 00:14:45.300 }, 00:14:45.300 { 00:14:45.300 "name": "BaseBdev2", 00:14:45.300 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:45.300 "is_configured": true, 00:14:45.300 "data_offset": 0, 00:14:45.300 "data_size": 65536 00:14:45.300 }, 00:14:45.300 { 00:14:45.300 "name": "BaseBdev3", 00:14:45.300 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:45.300 "is_configured": true, 00:14:45.300 "data_offset": 0, 00:14:45.300 "data_size": 65536 00:14:45.300 } 00:14:45.300 ] 00:14:45.300 }' 00:14:45.300 03:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.300 03:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.559 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:45.559 
03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.559 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.559 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.559 [2024-11-20 03:21:35.091377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.559 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.560 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:45.820 [2024-11-20 03:21:35.354731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:45.820 /dev/nbd0 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.820 1+0 records in 00:14:45.820 1+0 records out 00:14:45.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201127 s, 20.4 MB/s 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:45.820 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:46.391 512+0 records in 00:14:46.391 512+0 records out 00:14:46.391 67108864 bytes (67 MB, 64 MiB) copied, 0.361261 s, 186 MB/s 00:14:46.391 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:46.391 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.391 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.391 03:21:35 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.391 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:46.391 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.391 03:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:46.391 [2024-11-20 03:21:35.976216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.391 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.391 [2024-11-20 03:21:36.020606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:46.651 
03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.651 "name": "raid_bdev1", 00:14:46.651 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:46.651 "strip_size_kb": 64, 00:14:46.651 "state": "online", 00:14:46.651 "raid_level": "raid5f", 00:14:46.651 "superblock": false, 00:14:46.651 "num_base_bdevs": 3, 00:14:46.651 "num_base_bdevs_discovered": 2, 00:14:46.651 "num_base_bdevs_operational": 2, 00:14:46.651 "base_bdevs_list": [ 00:14:46.651 { 
00:14:46.651 "name": null, 00:14:46.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.651 "is_configured": false, 00:14:46.651 "data_offset": 0, 00:14:46.651 "data_size": 65536 00:14:46.651 }, 00:14:46.651 { 00:14:46.651 "name": "BaseBdev2", 00:14:46.651 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:46.651 "is_configured": true, 00:14:46.651 "data_offset": 0, 00:14:46.651 "data_size": 65536 00:14:46.651 }, 00:14:46.651 { 00:14:46.651 "name": "BaseBdev3", 00:14:46.651 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:46.651 "is_configured": true, 00:14:46.651 "data_offset": 0, 00:14:46.651 "data_size": 65536 00:14:46.651 } 00:14:46.651 ] 00:14:46.651 }' 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.651 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.912 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.912 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.912 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.912 [2024-11-20 03:21:36.463852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.912 [2024-11-20 03:21:36.480203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:46.912 03:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.912 03:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:46.912 [2024-11-20 03:21:36.489561] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.294 "name": "raid_bdev1", 00:14:48.294 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:48.294 "strip_size_kb": 64, 00:14:48.294 "state": "online", 00:14:48.294 "raid_level": "raid5f", 00:14:48.294 "superblock": false, 00:14:48.294 "num_base_bdevs": 3, 00:14:48.294 "num_base_bdevs_discovered": 3, 00:14:48.294 "num_base_bdevs_operational": 3, 00:14:48.294 "process": { 00:14:48.294 "type": "rebuild", 00:14:48.294 "target": "spare", 00:14:48.294 "progress": { 00:14:48.294 "blocks": 20480, 00:14:48.294 "percent": 15 00:14:48.294 } 00:14:48.294 }, 00:14:48.294 "base_bdevs_list": [ 00:14:48.294 { 00:14:48.294 "name": "spare", 00:14:48.294 "uuid": "39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:48.294 "is_configured": true, 00:14:48.294 "data_offset": 0, 00:14:48.294 "data_size": 65536 00:14:48.294 }, 00:14:48.294 { 00:14:48.294 "name": "BaseBdev2", 00:14:48.294 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:48.294 "is_configured": true, 00:14:48.294 "data_offset": 0, 00:14:48.294 
"data_size": 65536 00:14:48.294 }, 00:14:48.294 { 00:14:48.294 "name": "BaseBdev3", 00:14:48.294 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:48.294 "is_configured": true, 00:14:48.294 "data_offset": 0, 00:14:48.294 "data_size": 65536 00:14:48.294 } 00:14:48.294 ] 00:14:48.294 }' 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.294 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.295 [2024-11-20 03:21:37.620451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.295 [2024-11-20 03:21:37.698004] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.295 [2024-11-20 03:21:37.698137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.295 [2024-11-20 03:21:37.698183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.295 [2024-11-20 03:21:37.698206] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.295 "name": "raid_bdev1", 00:14:48.295 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:48.295 "strip_size_kb": 64, 00:14:48.295 "state": "online", 00:14:48.295 "raid_level": "raid5f", 00:14:48.295 "superblock": false, 00:14:48.295 "num_base_bdevs": 3, 00:14:48.295 "num_base_bdevs_discovered": 2, 00:14:48.295 "num_base_bdevs_operational": 2, 00:14:48.295 "base_bdevs_list": [ 00:14:48.295 { 00:14:48.295 "name": null, 00:14:48.295 
"uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.295 "is_configured": false, 00:14:48.295 "data_offset": 0, 00:14:48.295 "data_size": 65536 00:14:48.295 }, 00:14:48.295 { 00:14:48.295 "name": "BaseBdev2", 00:14:48.295 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:48.295 "is_configured": true, 00:14:48.295 "data_offset": 0, 00:14:48.295 "data_size": 65536 00:14:48.295 }, 00:14:48.295 { 00:14:48.295 "name": "BaseBdev3", 00:14:48.295 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:48.295 "is_configured": true, 00:14:48.295 "data_offset": 0, 00:14:48.295 "data_size": 65536 00:14:48.295 } 00:14:48.295 ] 00:14:48.295 }' 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.295 03:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.555 "name": "raid_bdev1", 00:14:48.555 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:48.555 "strip_size_kb": 64, 00:14:48.555 "state": "online", 00:14:48.555 "raid_level": "raid5f", 00:14:48.555 "superblock": false, 00:14:48.555 "num_base_bdevs": 3, 00:14:48.555 "num_base_bdevs_discovered": 2, 00:14:48.555 "num_base_bdevs_operational": 2, 00:14:48.555 "base_bdevs_list": [ 00:14:48.555 { 00:14:48.555 "name": null, 00:14:48.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.555 "is_configured": false, 00:14:48.555 "data_offset": 0, 00:14:48.555 "data_size": 65536 00:14:48.555 }, 00:14:48.555 { 00:14:48.555 "name": "BaseBdev2", 00:14:48.555 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:48.555 "is_configured": true, 00:14:48.555 "data_offset": 0, 00:14:48.555 "data_size": 65536 00:14:48.555 }, 00:14:48.555 { 00:14:48.555 "name": "BaseBdev3", 00:14:48.555 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:48.555 "is_configured": true, 00:14:48.555 "data_offset": 0, 00:14:48.555 "data_size": 65536 00:14:48.555 } 00:14:48.555 ] 00:14:48.555 }' 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.555 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.815 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.815 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:48.815 03:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.815 03:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.815 [2024-11-20 03:21:38.205247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:14:48.815 [2024-11-20 03:21:38.221112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:48.815 03:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.815 03:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:48.815 [2024-11-20 03:21:38.228788] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.754 "name": "raid_bdev1", 00:14:49.754 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:49.754 "strip_size_kb": 64, 00:14:49.754 "state": "online", 00:14:49.754 "raid_level": "raid5f", 00:14:49.754 "superblock": false, 00:14:49.754 "num_base_bdevs": 3, 00:14:49.754 
"num_base_bdevs_discovered": 3, 00:14:49.754 "num_base_bdevs_operational": 3, 00:14:49.754 "process": { 00:14:49.754 "type": "rebuild", 00:14:49.754 "target": "spare", 00:14:49.754 "progress": { 00:14:49.754 "blocks": 20480, 00:14:49.754 "percent": 15 00:14:49.754 } 00:14:49.754 }, 00:14:49.754 "base_bdevs_list": [ 00:14:49.754 { 00:14:49.754 "name": "spare", 00:14:49.754 "uuid": "39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:49.754 "is_configured": true, 00:14:49.754 "data_offset": 0, 00:14:49.754 "data_size": 65536 00:14:49.754 }, 00:14:49.754 { 00:14:49.754 "name": "BaseBdev2", 00:14:49.754 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:49.754 "is_configured": true, 00:14:49.754 "data_offset": 0, 00:14:49.754 "data_size": 65536 00:14:49.754 }, 00:14:49.754 { 00:14:49.754 "name": "BaseBdev3", 00:14:49.754 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:49.754 "is_configured": true, 00:14:49.754 "data_offset": 0, 00:14:49.754 "data_size": 65536 00:14:49.754 } 00:14:49.754 ] 00:14:49.754 }' 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=543 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.754 03:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.014 03:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.014 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.014 "name": "raid_bdev1", 00:14:50.014 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:50.014 "strip_size_kb": 64, 00:14:50.014 "state": "online", 00:14:50.014 "raid_level": "raid5f", 00:14:50.014 "superblock": false, 00:14:50.014 "num_base_bdevs": 3, 00:14:50.014 "num_base_bdevs_discovered": 3, 00:14:50.014 "num_base_bdevs_operational": 3, 00:14:50.014 "process": { 00:14:50.014 "type": "rebuild", 00:14:50.014 "target": "spare", 00:14:50.014 "progress": { 00:14:50.014 "blocks": 22528, 00:14:50.014 "percent": 17 00:14:50.014 } 00:14:50.014 }, 00:14:50.014 "base_bdevs_list": [ 00:14:50.014 { 00:14:50.014 "name": "spare", 00:14:50.014 "uuid": "39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:50.014 "is_configured": true, 00:14:50.014 "data_offset": 0, 00:14:50.014 
"data_size": 65536 00:14:50.014 }, 00:14:50.014 { 00:14:50.014 "name": "BaseBdev2", 00:14:50.014 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:50.014 "is_configured": true, 00:14:50.014 "data_offset": 0, 00:14:50.014 "data_size": 65536 00:14:50.014 }, 00:14:50.014 { 00:14:50.014 "name": "BaseBdev3", 00:14:50.014 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:50.014 "is_configured": true, 00:14:50.014 "data_offset": 0, 00:14:50.014 "data_size": 65536 00:14:50.014 } 00:14:50.014 ] 00:14:50.014 }' 00:14:50.014 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.014 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.014 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.014 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.014 03:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.953 03:21:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.953 "name": "raid_bdev1", 00:14:50.953 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:50.953 "strip_size_kb": 64, 00:14:50.953 "state": "online", 00:14:50.953 "raid_level": "raid5f", 00:14:50.953 "superblock": false, 00:14:50.953 "num_base_bdevs": 3, 00:14:50.953 "num_base_bdevs_discovered": 3, 00:14:50.953 "num_base_bdevs_operational": 3, 00:14:50.953 "process": { 00:14:50.953 "type": "rebuild", 00:14:50.953 "target": "spare", 00:14:50.953 "progress": { 00:14:50.953 "blocks": 45056, 00:14:50.953 "percent": 34 00:14:50.953 } 00:14:50.953 }, 00:14:50.953 "base_bdevs_list": [ 00:14:50.953 { 00:14:50.953 "name": "spare", 00:14:50.953 "uuid": "39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:50.953 "is_configured": true, 00:14:50.953 "data_offset": 0, 00:14:50.953 "data_size": 65536 00:14:50.953 }, 00:14:50.953 { 00:14:50.953 "name": "BaseBdev2", 00:14:50.953 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:50.953 "is_configured": true, 00:14:50.953 "data_offset": 0, 00:14:50.953 "data_size": 65536 00:14:50.953 }, 00:14:50.953 { 00:14:50.953 "name": "BaseBdev3", 00:14:50.953 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:50.953 "is_configured": true, 00:14:50.953 "data_offset": 0, 00:14:50.953 "data_size": 65536 00:14:50.953 } 00:14:50.953 ] 00:14:50.953 }' 00:14:50.953 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.213 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.213 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:51.213 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.213 03:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.154 "name": "raid_bdev1", 00:14:52.154 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:52.154 "strip_size_kb": 64, 00:14:52.154 "state": "online", 00:14:52.154 "raid_level": "raid5f", 00:14:52.154 "superblock": false, 00:14:52.154 "num_base_bdevs": 3, 00:14:52.154 "num_base_bdevs_discovered": 3, 00:14:52.154 "num_base_bdevs_operational": 3, 00:14:52.154 "process": { 00:14:52.154 "type": "rebuild", 00:14:52.154 "target": "spare", 00:14:52.154 
"progress": { 00:14:52.154 "blocks": 69632, 00:14:52.154 "percent": 53 00:14:52.154 } 00:14:52.154 }, 00:14:52.154 "base_bdevs_list": [ 00:14:52.154 { 00:14:52.154 "name": "spare", 00:14:52.154 "uuid": "39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:52.154 "is_configured": true, 00:14:52.154 "data_offset": 0, 00:14:52.154 "data_size": 65536 00:14:52.154 }, 00:14:52.154 { 00:14:52.154 "name": "BaseBdev2", 00:14:52.154 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:52.154 "is_configured": true, 00:14:52.154 "data_offset": 0, 00:14:52.154 "data_size": 65536 00:14:52.154 }, 00:14:52.154 { 00:14:52.154 "name": "BaseBdev3", 00:14:52.154 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:52.154 "is_configured": true, 00:14:52.154 "data_offset": 0, 00:14:52.154 "data_size": 65536 00:14:52.154 } 00:14:52.154 ] 00:14:52.154 }' 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.154 03:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.536 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.536 "name": "raid_bdev1", 00:14:53.536 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:53.536 "strip_size_kb": 64, 00:14:53.536 "state": "online", 00:14:53.536 "raid_level": "raid5f", 00:14:53.536 "superblock": false, 00:14:53.536 "num_base_bdevs": 3, 00:14:53.536 "num_base_bdevs_discovered": 3, 00:14:53.536 "num_base_bdevs_operational": 3, 00:14:53.536 "process": { 00:14:53.536 "type": "rebuild", 00:14:53.536 "target": "spare", 00:14:53.536 "progress": { 00:14:53.536 "blocks": 92160, 00:14:53.536 "percent": 70 00:14:53.536 } 00:14:53.536 }, 00:14:53.536 "base_bdevs_list": [ 00:14:53.536 { 00:14:53.536 "name": "spare", 00:14:53.536 "uuid": "39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:53.536 "is_configured": true, 00:14:53.536 "data_offset": 0, 00:14:53.536 "data_size": 65536 00:14:53.536 }, 00:14:53.536 { 00:14:53.536 "name": "BaseBdev2", 00:14:53.536 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:53.536 "is_configured": true, 00:14:53.536 "data_offset": 0, 00:14:53.536 "data_size": 65536 00:14:53.536 }, 00:14:53.536 { 00:14:53.536 "name": "BaseBdev3", 00:14:53.536 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:53.536 "is_configured": true, 00:14:53.536 "data_offset": 0, 00:14:53.536 "data_size": 65536 00:14:53.537 } 00:14:53.537 ] 00:14:53.537 }' 
00:14:53.537 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.537 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.537 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.537 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.537 03:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.477 "name": "raid_bdev1", 00:14:54.477 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:54.477 "strip_size_kb": 64, 00:14:54.477 
"state": "online", 00:14:54.477 "raid_level": "raid5f", 00:14:54.477 "superblock": false, 00:14:54.477 "num_base_bdevs": 3, 00:14:54.477 "num_base_bdevs_discovered": 3, 00:14:54.477 "num_base_bdevs_operational": 3, 00:14:54.477 "process": { 00:14:54.477 "type": "rebuild", 00:14:54.477 "target": "spare", 00:14:54.477 "progress": { 00:14:54.477 "blocks": 114688, 00:14:54.477 "percent": 87 00:14:54.477 } 00:14:54.477 }, 00:14:54.477 "base_bdevs_list": [ 00:14:54.477 { 00:14:54.477 "name": "spare", 00:14:54.477 "uuid": "39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:54.477 "is_configured": true, 00:14:54.477 "data_offset": 0, 00:14:54.477 "data_size": 65536 00:14:54.477 }, 00:14:54.477 { 00:14:54.477 "name": "BaseBdev2", 00:14:54.477 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:54.477 "is_configured": true, 00:14:54.477 "data_offset": 0, 00:14:54.477 "data_size": 65536 00:14:54.477 }, 00:14:54.477 { 00:14:54.477 "name": "BaseBdev3", 00:14:54.477 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:54.477 "is_configured": true, 00:14:54.477 "data_offset": 0, 00:14:54.477 "data_size": 65536 00:14:54.477 } 00:14:54.477 ] 00:14:54.477 }' 00:14:54.477 03:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.477 03:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.477 03:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.477 03:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.477 03:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.048 [2024-11-20 03:21:44.671309] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:55.048 [2024-11-20 03:21:44.671388] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:55.048 [2024-11-20 
03:21:44.671429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.616 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.616 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.616 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.617 "name": "raid_bdev1", 00:14:55.617 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:55.617 "strip_size_kb": 64, 00:14:55.617 "state": "online", 00:14:55.617 "raid_level": "raid5f", 00:14:55.617 "superblock": false, 00:14:55.617 "num_base_bdevs": 3, 00:14:55.617 "num_base_bdevs_discovered": 3, 00:14:55.617 "num_base_bdevs_operational": 3, 00:14:55.617 "base_bdevs_list": [ 00:14:55.617 { 00:14:55.617 "name": "spare", 00:14:55.617 "uuid": "39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:55.617 "is_configured": true, 00:14:55.617 "data_offset": 0, 00:14:55.617 "data_size": 65536 
00:14:55.617 }, 00:14:55.617 { 00:14:55.617 "name": "BaseBdev2", 00:14:55.617 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:55.617 "is_configured": true, 00:14:55.617 "data_offset": 0, 00:14:55.617 "data_size": 65536 00:14:55.617 }, 00:14:55.617 { 00:14:55.617 "name": "BaseBdev3", 00:14:55.617 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:55.617 "is_configured": true, 00:14:55.617 "data_offset": 0, 00:14:55.617 "data_size": 65536 00:14:55.617 } 00:14:55.617 ] 00:14:55.617 }' 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:55.617 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.877 "name": "raid_bdev1", 00:14:55.877 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:55.877 "strip_size_kb": 64, 00:14:55.877 "state": "online", 00:14:55.877 "raid_level": "raid5f", 00:14:55.877 "superblock": false, 00:14:55.877 "num_base_bdevs": 3, 00:14:55.877 "num_base_bdevs_discovered": 3, 00:14:55.877 "num_base_bdevs_operational": 3, 00:14:55.877 "base_bdevs_list": [ 00:14:55.877 { 00:14:55.877 "name": "spare", 00:14:55.877 "uuid": "39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:55.877 "is_configured": true, 00:14:55.877 "data_offset": 0, 00:14:55.877 "data_size": 65536 00:14:55.877 }, 00:14:55.877 { 00:14:55.877 "name": "BaseBdev2", 00:14:55.877 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:55.877 "is_configured": true, 00:14:55.877 "data_offset": 0, 00:14:55.877 "data_size": 65536 00:14:55.877 }, 00:14:55.877 { 00:14:55.877 "name": "BaseBdev3", 00:14:55.877 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:55.877 "is_configured": true, 00:14:55.877 "data_offset": 0, 00:14:55.877 "data_size": 65536 00:14:55.877 } 00:14:55.877 ] 00:14:55.877 }' 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.877 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.878 "name": "raid_bdev1", 00:14:55.878 "uuid": "427000b0-d4ec-41a9-bbf2-8865eb7c2a27", 00:14:55.878 "strip_size_kb": 64, 00:14:55.878 "state": "online", 00:14:55.878 "raid_level": "raid5f", 00:14:55.878 "superblock": false, 00:14:55.878 "num_base_bdevs": 3, 00:14:55.878 "num_base_bdevs_discovered": 3, 00:14:55.878 "num_base_bdevs_operational": 3, 00:14:55.878 "base_bdevs_list": [ 00:14:55.878 { 00:14:55.878 "name": "spare", 00:14:55.878 "uuid": 
"39a94b45-b3cb-57ec-b2b6-6aab7b564d93", 00:14:55.878 "is_configured": true, 00:14:55.878 "data_offset": 0, 00:14:55.878 "data_size": 65536 00:14:55.878 }, 00:14:55.878 { 00:14:55.878 "name": "BaseBdev2", 00:14:55.878 "uuid": "8df548c7-edcb-564d-880d-097df230190d", 00:14:55.878 "is_configured": true, 00:14:55.878 "data_offset": 0, 00:14:55.878 "data_size": 65536 00:14:55.878 }, 00:14:55.878 { 00:14:55.878 "name": "BaseBdev3", 00:14:55.878 "uuid": "b82e12f6-d486-5c81-82cf-85187a147aa7", 00:14:55.878 "is_configured": true, 00:14:55.878 "data_offset": 0, 00:14:55.878 "data_size": 65536 00:14:55.878 } 00:14:55.878 ] 00:14:55.878 }' 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.878 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.448 [2024-11-20 03:21:45.846510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.448 [2024-11-20 03:21:45.846541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.448 [2024-11-20 03:21:45.846645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.448 [2024-11-20 03:21:45.846725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.448 [2024-11-20 03:21:45.846741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:56.448 03:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:56.708 /dev/nbd0 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.708 1+0 records in 00:14:56.708 1+0 records out 00:14:56.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435035 s, 9.4 MB/s 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:56.708 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:56.967 /dev/nbd1 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.967 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.967 1+0 records in 00:14:56.967 1+0 records out 00:14:56.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256654 s, 16.0 MB/s 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.968 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.227 03:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81385 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81385 ']' 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81385 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:57.486 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.487 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 81385 00:14:57.487 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.487 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.487 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81385' 00:14:57.487 killing process with pid 81385 00:14:57.487 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81385 00:14:57.487 Received shutdown signal, test time was about 60.000000 seconds 00:14:57.487 00:14:57.487 Latency(us) 00:14:57.487 [2024-11-20T03:21:47.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.487 [2024-11-20T03:21:47.122Z] =================================================================================================================== 00:14:57.487 [2024-11-20T03:21:47.122Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:57.487 [2024-11-20 03:21:47.065718] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.487 03:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81385 00:14:58.057 [2024-11-20 03:21:47.457727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:58.997 00:14:58.997 real 0m15.081s 00:14:58.997 user 0m18.460s 00:14:58.997 sys 0m1.955s 00:14:58.997 ************************************ 00:14:58.997 END TEST raid5f_rebuild_test 00:14:58.997 ************************************ 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.997 03:21:48 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:58.997 03:21:48 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:58.997 03:21:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.997 03:21:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.997 ************************************ 00:14:58.997 START TEST raid5f_rebuild_test_sb 00:14:58.997 ************************************ 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:58.997 03:21:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81824 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81824 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 
00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81824 ']' 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.997 03:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.257 [2024-11-20 03:21:48.685096] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:14:59.257 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:59.257 Zero copy mechanism will not be used. 
00:14:59.258 [2024-11-20 03:21:48.685289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81824 ] 00:14:59.258 [2024-11-20 03:21:48.837390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.517 [2024-11-20 03:21:48.947732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.517 [2024-11-20 03:21:49.145442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.517 [2024-11-20 03:21:49.145498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.087 BaseBdev1_malloc 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.087 [2024-11-20 03:21:49.557943] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:00.087 [2024-11-20 03:21:49.558010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.087 [2024-11-20 03:21:49.558034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:00.087 [2024-11-20 03:21:49.558045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.087 [2024-11-20 03:21:49.560094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.087 [2024-11-20 03:21:49.560132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:00.087 BaseBdev1 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.087 BaseBdev2_malloc 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.087 [2024-11-20 03:21:49.610948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:00.087 [2024-11-20 03:21:49.611012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:00.087 [2024-11-20 03:21:49.611032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:00.087 [2024-11-20 03:21:49.611042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.087 [2024-11-20 03:21:49.613128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.087 [2024-11-20 03:21:49.613223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:00.087 BaseBdev2 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.087 BaseBdev3_malloc 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.087 [2024-11-20 03:21:49.677115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:00.087 [2024-11-20 03:21:49.677167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.087 [2024-11-20 03:21:49.677189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:00.087 [2024-11-20 
03:21:49.677200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.087 [2024-11-20 03:21:49.679250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.087 [2024-11-20 03:21:49.679287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:00.087 BaseBdev3 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.087 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.347 spare_malloc 00:15:00.347 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.347 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:00.347 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.347 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.347 spare_delay 00:15:00.347 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.347 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:00.347 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.348 [2024-11-20 03:21:49.743845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:00.348 [2024-11-20 03:21:49.743948] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.348 [2024-11-20 03:21:49.743971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:00.348 [2024-11-20 03:21:49.743982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.348 [2024-11-20 03:21:49.746114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.348 [2024-11-20 03:21:49.746156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:00.348 spare 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.348 [2024-11-20 03:21:49.755899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.348 [2024-11-20 03:21:49.757677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.348 [2024-11-20 03:21:49.757738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.348 [2024-11-20 03:21:49.757914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:00.348 [2024-11-20 03:21:49.757928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:00.348 [2024-11-20 03:21:49.758192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:00.348 [2024-11-20 03:21:49.763895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:00.348 [2024-11-20 03:21:49.763916] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:00.348 [2024-11-20 03:21:49.764131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.348 "name": "raid_bdev1", 00:15:00.348 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:00.348 "strip_size_kb": 64, 00:15:00.348 "state": "online", 00:15:00.348 "raid_level": "raid5f", 00:15:00.348 "superblock": true, 00:15:00.348 "num_base_bdevs": 3, 00:15:00.348 "num_base_bdevs_discovered": 3, 00:15:00.348 "num_base_bdevs_operational": 3, 00:15:00.348 "base_bdevs_list": [ 00:15:00.348 { 00:15:00.348 "name": "BaseBdev1", 00:15:00.348 "uuid": "6a01806b-498d-5ae1-a5e2-348db7d70afb", 00:15:00.348 "is_configured": true, 00:15:00.348 "data_offset": 2048, 00:15:00.348 "data_size": 63488 00:15:00.348 }, 00:15:00.348 { 00:15:00.348 "name": "BaseBdev2", 00:15:00.348 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:00.348 "is_configured": true, 00:15:00.348 "data_offset": 2048, 00:15:00.348 "data_size": 63488 00:15:00.348 }, 00:15:00.348 { 00:15:00.348 "name": "BaseBdev3", 00:15:00.348 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:00.348 "is_configured": true, 00:15:00.348 "data_offset": 2048, 00:15:00.348 "data_size": 63488 00:15:00.348 } 00:15:00.348 ] 00:15:00.348 }' 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.348 03:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.608 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:00.608 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.608 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.608 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.608 [2024-11-20 03:21:50.234328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.868 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:00.868 [2024-11-20 03:21:50.489760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:01.128 /dev/nbd0 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.128 1+0 records in 00:15:01.128 1+0 records out 00:15:01.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368128 s, 11.1 MB/s 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.128 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.129 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:01.129 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:01.129 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:01.129 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:01.389 496+0 records in 00:15:01.389 496+0 records out 00:15:01.389 65011712 bytes (65 MB, 62 MiB) copied, 0.348505 s, 187 MB/s 00:15:01.389 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:01.389 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.389 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:01.389 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.389 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:01.389 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:01.389 03:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:01.649 [2024-11-20 03:21:51.113318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.649 [2024-11-20 03:21:51.144851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.649 "name": "raid_bdev1", 00:15:01.649 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:01.649 "strip_size_kb": 64, 00:15:01.649 "state": "online", 00:15:01.649 "raid_level": "raid5f", 00:15:01.649 "superblock": true, 00:15:01.649 "num_base_bdevs": 3, 00:15:01.649 "num_base_bdevs_discovered": 2, 00:15:01.649 "num_base_bdevs_operational": 2, 00:15:01.649 "base_bdevs_list": [ 00:15:01.649 { 00:15:01.649 "name": null, 00:15:01.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.649 "is_configured": 
false, 00:15:01.649 "data_offset": 0, 00:15:01.649 "data_size": 63488 00:15:01.649 }, 00:15:01.649 { 00:15:01.649 "name": "BaseBdev2", 00:15:01.649 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:01.649 "is_configured": true, 00:15:01.649 "data_offset": 2048, 00:15:01.649 "data_size": 63488 00:15:01.649 }, 00:15:01.649 { 00:15:01.649 "name": "BaseBdev3", 00:15:01.649 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:01.649 "is_configured": true, 00:15:01.649 "data_offset": 2048, 00:15:01.649 "data_size": 63488 00:15:01.649 } 00:15:01.649 ] 00:15:01.649 }' 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.649 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.219 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.219 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.219 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.219 [2024-11-20 03:21:51.616035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.219 [2024-11-20 03:21:51.633094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:02.219 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.219 03:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:02.219 [2024-11-20 03:21:51.640448] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.159 03:21:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.159 "name": "raid_bdev1", 00:15:03.159 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:03.159 "strip_size_kb": 64, 00:15:03.159 "state": "online", 00:15:03.159 "raid_level": "raid5f", 00:15:03.159 "superblock": true, 00:15:03.159 "num_base_bdevs": 3, 00:15:03.159 "num_base_bdevs_discovered": 3, 00:15:03.159 "num_base_bdevs_operational": 3, 00:15:03.159 "process": { 00:15:03.159 "type": "rebuild", 00:15:03.159 "target": "spare", 00:15:03.159 "progress": { 00:15:03.159 "blocks": 20480, 00:15:03.159 "percent": 16 00:15:03.159 } 00:15:03.159 }, 00:15:03.159 "base_bdevs_list": [ 00:15:03.159 { 00:15:03.159 "name": "spare", 00:15:03.159 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:03.159 "is_configured": true, 00:15:03.159 "data_offset": 2048, 00:15:03.159 "data_size": 63488 00:15:03.159 }, 00:15:03.159 { 00:15:03.159 "name": "BaseBdev2", 00:15:03.159 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:03.159 "is_configured": true, 00:15:03.159 "data_offset": 2048, 00:15:03.159 "data_size": 63488 
00:15:03.159 }, 00:15:03.159 { 00:15:03.159 "name": "BaseBdev3", 00:15:03.159 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:03.159 "is_configured": true, 00:15:03.159 "data_offset": 2048, 00:15:03.159 "data_size": 63488 00:15:03.159 } 00:15:03.159 ] 00:15:03.159 }' 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.159 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.420 [2024-11-20 03:21:52.795215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.420 [2024-11-20 03:21:52.848799] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:03.420 [2024-11-20 03:21:52.848866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.420 [2024-11-20 03:21:52.848887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.420 [2024-11-20 03:21:52.848896] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.420 "name": "raid_bdev1", 00:15:03.420 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:03.420 "strip_size_kb": 64, 00:15:03.420 "state": "online", 00:15:03.420 "raid_level": "raid5f", 00:15:03.420 "superblock": true, 00:15:03.420 "num_base_bdevs": 3, 00:15:03.420 "num_base_bdevs_discovered": 2, 00:15:03.420 "num_base_bdevs_operational": 2, 00:15:03.420 "base_bdevs_list": [ 00:15:03.420 
{ 00:15:03.420 "name": null, 00:15:03.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.420 "is_configured": false, 00:15:03.420 "data_offset": 0, 00:15:03.420 "data_size": 63488 00:15:03.420 }, 00:15:03.420 { 00:15:03.420 "name": "BaseBdev2", 00:15:03.420 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:03.420 "is_configured": true, 00:15:03.420 "data_offset": 2048, 00:15:03.420 "data_size": 63488 00:15:03.420 }, 00:15:03.420 { 00:15:03.420 "name": "BaseBdev3", 00:15:03.420 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:03.420 "is_configured": true, 00:15:03.420 "data_offset": 2048, 00:15:03.420 "data_size": 63488 00:15:03.420 } 00:15:03.420 ] 00:15:03.420 }' 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.420 03:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.680 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.680 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.680 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.680 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.680 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.940 "name": "raid_bdev1", 00:15:03.940 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:03.940 "strip_size_kb": 64, 00:15:03.940 "state": "online", 00:15:03.940 "raid_level": "raid5f", 00:15:03.940 "superblock": true, 00:15:03.940 "num_base_bdevs": 3, 00:15:03.940 "num_base_bdevs_discovered": 2, 00:15:03.940 "num_base_bdevs_operational": 2, 00:15:03.940 "base_bdevs_list": [ 00:15:03.940 { 00:15:03.940 "name": null, 00:15:03.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.940 "is_configured": false, 00:15:03.940 "data_offset": 0, 00:15:03.940 "data_size": 63488 00:15:03.940 }, 00:15:03.940 { 00:15:03.940 "name": "BaseBdev2", 00:15:03.940 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:03.940 "is_configured": true, 00:15:03.940 "data_offset": 2048, 00:15:03.940 "data_size": 63488 00:15:03.940 }, 00:15:03.940 { 00:15:03.940 "name": "BaseBdev3", 00:15:03.940 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:03.940 "is_configured": true, 00:15:03.940 "data_offset": 2048, 00:15:03.940 "data_size": 63488 00:15:03.940 } 00:15:03.940 ] 00:15:03.940 }' 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:03.940 [2024-11-20 03:21:53.453653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.940 [2024-11-20 03:21:53.470718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.940 03:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:03.940 [2024-11-20 03:21:53.478984] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.880 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.141 "name": "raid_bdev1", 00:15:05.141 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:05.141 "strip_size_kb": 64, 00:15:05.141 "state": "online", 
00:15:05.141 "raid_level": "raid5f", 00:15:05.141 "superblock": true, 00:15:05.141 "num_base_bdevs": 3, 00:15:05.141 "num_base_bdevs_discovered": 3, 00:15:05.141 "num_base_bdevs_operational": 3, 00:15:05.141 "process": { 00:15:05.141 "type": "rebuild", 00:15:05.141 "target": "spare", 00:15:05.141 "progress": { 00:15:05.141 "blocks": 18432, 00:15:05.141 "percent": 14 00:15:05.141 } 00:15:05.141 }, 00:15:05.141 "base_bdevs_list": [ 00:15:05.141 { 00:15:05.141 "name": "spare", 00:15:05.141 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:05.141 "is_configured": true, 00:15:05.141 "data_offset": 2048, 00:15:05.141 "data_size": 63488 00:15:05.141 }, 00:15:05.141 { 00:15:05.141 "name": "BaseBdev2", 00:15:05.141 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:05.141 "is_configured": true, 00:15:05.141 "data_offset": 2048, 00:15:05.141 "data_size": 63488 00:15:05.141 }, 00:15:05.141 { 00:15:05.141 "name": "BaseBdev3", 00:15:05.141 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:05.141 "is_configured": true, 00:15:05.141 "data_offset": 2048, 00:15:05.141 "data_size": 63488 00:15:05.141 } 00:15:05.141 ] 00:15:05.141 }' 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:05.141 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=558 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.141 "name": "raid_bdev1", 00:15:05.141 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:05.141 "strip_size_kb": 64, 00:15:05.141 "state": "online", 00:15:05.141 "raid_level": "raid5f", 00:15:05.141 "superblock": true, 00:15:05.141 "num_base_bdevs": 3, 00:15:05.141 "num_base_bdevs_discovered": 3, 00:15:05.141 "num_base_bdevs_operational": 3, 00:15:05.141 "process": { 00:15:05.141 "type": 
"rebuild", 00:15:05.141 "target": "spare", 00:15:05.141 "progress": { 00:15:05.141 "blocks": 22528, 00:15:05.141 "percent": 17 00:15:05.141 } 00:15:05.141 }, 00:15:05.141 "base_bdevs_list": [ 00:15:05.141 { 00:15:05.141 "name": "spare", 00:15:05.141 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:05.141 "is_configured": true, 00:15:05.141 "data_offset": 2048, 00:15:05.141 "data_size": 63488 00:15:05.141 }, 00:15:05.141 { 00:15:05.141 "name": "BaseBdev2", 00:15:05.141 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:05.141 "is_configured": true, 00:15:05.141 "data_offset": 2048, 00:15:05.141 "data_size": 63488 00:15:05.141 }, 00:15:05.141 { 00:15:05.141 "name": "BaseBdev3", 00:15:05.141 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:05.141 "is_configured": true, 00:15:05.141 "data_offset": 2048, 00:15:05.141 "data_size": 63488 00:15:05.141 } 00:15:05.141 ] 00:15:05.141 }' 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.141 03:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.521 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.521 "name": "raid_bdev1", 00:15:06.521 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:06.521 "strip_size_kb": 64, 00:15:06.521 "state": "online", 00:15:06.521 "raid_level": "raid5f", 00:15:06.521 "superblock": true, 00:15:06.521 "num_base_bdevs": 3, 00:15:06.521 "num_base_bdevs_discovered": 3, 00:15:06.521 "num_base_bdevs_operational": 3, 00:15:06.521 "process": { 00:15:06.521 "type": "rebuild", 00:15:06.521 "target": "spare", 00:15:06.521 "progress": { 00:15:06.522 "blocks": 45056, 00:15:06.522 "percent": 35 00:15:06.522 } 00:15:06.522 }, 00:15:06.522 "base_bdevs_list": [ 00:15:06.522 { 00:15:06.522 "name": "spare", 00:15:06.522 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:06.522 "is_configured": true, 00:15:06.522 "data_offset": 2048, 00:15:06.522 "data_size": 63488 00:15:06.522 }, 00:15:06.522 { 00:15:06.522 "name": "BaseBdev2", 00:15:06.522 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:06.522 "is_configured": true, 00:15:06.522 "data_offset": 2048, 00:15:06.522 "data_size": 63488 00:15:06.522 }, 00:15:06.522 { 00:15:06.522 "name": "BaseBdev3", 00:15:06.522 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:06.522 
"is_configured": true, 00:15:06.522 "data_offset": 2048, 00:15:06.522 "data_size": 63488 00:15:06.522 } 00:15:06.522 ] 00:15:06.522 }' 00:15:06.522 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.522 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.522 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.522 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.522 03:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.459 "name": "raid_bdev1", 00:15:07.459 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:07.459 "strip_size_kb": 64, 00:15:07.459 "state": "online", 00:15:07.459 "raid_level": "raid5f", 00:15:07.459 "superblock": true, 00:15:07.459 "num_base_bdevs": 3, 00:15:07.459 "num_base_bdevs_discovered": 3, 00:15:07.459 "num_base_bdevs_operational": 3, 00:15:07.459 "process": { 00:15:07.459 "type": "rebuild", 00:15:07.459 "target": "spare", 00:15:07.459 "progress": { 00:15:07.459 "blocks": 69632, 00:15:07.459 "percent": 54 00:15:07.459 } 00:15:07.459 }, 00:15:07.459 "base_bdevs_list": [ 00:15:07.459 { 00:15:07.459 "name": "spare", 00:15:07.459 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:07.459 "is_configured": true, 00:15:07.459 "data_offset": 2048, 00:15:07.459 "data_size": 63488 00:15:07.459 }, 00:15:07.459 { 00:15:07.459 "name": "BaseBdev2", 00:15:07.459 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:07.459 "is_configured": true, 00:15:07.459 "data_offset": 2048, 00:15:07.459 "data_size": 63488 00:15:07.459 }, 00:15:07.459 { 00:15:07.459 "name": "BaseBdev3", 00:15:07.459 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:07.459 "is_configured": true, 00:15:07.459 "data_offset": 2048, 00:15:07.459 "data_size": 63488 00:15:07.459 } 00:15:07.459 ] 00:15:07.459 }' 00:15:07.459 03:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.459 03:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.459 03:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.459 03:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.459 03:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.840 "name": "raid_bdev1", 00:15:08.840 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:08.840 "strip_size_kb": 64, 00:15:08.840 "state": "online", 00:15:08.840 "raid_level": "raid5f", 00:15:08.840 "superblock": true, 00:15:08.840 "num_base_bdevs": 3, 00:15:08.840 "num_base_bdevs_discovered": 3, 00:15:08.840 "num_base_bdevs_operational": 3, 00:15:08.840 "process": { 00:15:08.840 "type": "rebuild", 00:15:08.840 "target": "spare", 00:15:08.840 "progress": { 00:15:08.840 "blocks": 92160, 00:15:08.840 "percent": 72 00:15:08.840 } 00:15:08.840 }, 00:15:08.840 "base_bdevs_list": [ 00:15:08.840 { 00:15:08.840 "name": "spare", 00:15:08.840 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:08.840 "is_configured": true, 
00:15:08.840 "data_offset": 2048, 00:15:08.840 "data_size": 63488 00:15:08.840 }, 00:15:08.840 { 00:15:08.840 "name": "BaseBdev2", 00:15:08.840 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:08.840 "is_configured": true, 00:15:08.840 "data_offset": 2048, 00:15:08.840 "data_size": 63488 00:15:08.840 }, 00:15:08.840 { 00:15:08.840 "name": "BaseBdev3", 00:15:08.840 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:08.840 "is_configured": true, 00:15:08.840 "data_offset": 2048, 00:15:08.840 "data_size": 63488 00:15:08.840 } 00:15:08.840 ] 00:15:08.840 }' 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.840 03:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.778 "name": "raid_bdev1", 00:15:09.778 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:09.778 "strip_size_kb": 64, 00:15:09.778 "state": "online", 00:15:09.778 "raid_level": "raid5f", 00:15:09.778 "superblock": true, 00:15:09.778 "num_base_bdevs": 3, 00:15:09.778 "num_base_bdevs_discovered": 3, 00:15:09.778 "num_base_bdevs_operational": 3, 00:15:09.778 "process": { 00:15:09.778 "type": "rebuild", 00:15:09.778 "target": "spare", 00:15:09.778 "progress": { 00:15:09.778 "blocks": 116736, 00:15:09.778 "percent": 91 00:15:09.778 } 00:15:09.778 }, 00:15:09.778 "base_bdevs_list": [ 00:15:09.778 { 00:15:09.778 "name": "spare", 00:15:09.778 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:09.778 "is_configured": true, 00:15:09.778 "data_offset": 2048, 00:15:09.778 "data_size": 63488 00:15:09.778 }, 00:15:09.778 { 00:15:09.778 "name": "BaseBdev2", 00:15:09.778 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:09.778 "is_configured": true, 00:15:09.778 "data_offset": 2048, 00:15:09.778 "data_size": 63488 00:15:09.778 }, 00:15:09.778 { 00:15:09.778 "name": "BaseBdev3", 00:15:09.778 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:09.778 "is_configured": true, 00:15:09.778 "data_offset": 2048, 00:15:09.778 "data_size": 63488 00:15:09.778 } 00:15:09.778 ] 00:15:09.778 }' 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.778 03:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.362 [2024-11-20 03:21:59.723222] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:10.362 [2024-11-20 03:21:59.723315] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:10.362 [2024-11-20 03:21:59.723441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.934 03:22:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.934 "name": "raid_bdev1", 00:15:10.934 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:10.934 "strip_size_kb": 64, 00:15:10.934 "state": "online", 00:15:10.934 "raid_level": "raid5f", 00:15:10.934 "superblock": true, 00:15:10.934 "num_base_bdevs": 3, 00:15:10.934 "num_base_bdevs_discovered": 3, 00:15:10.934 "num_base_bdevs_operational": 3, 00:15:10.934 "base_bdevs_list": [ 00:15:10.934 { 00:15:10.934 "name": "spare", 00:15:10.934 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:10.934 "is_configured": true, 00:15:10.934 "data_offset": 2048, 00:15:10.934 "data_size": 63488 00:15:10.934 }, 00:15:10.934 { 00:15:10.934 "name": "BaseBdev2", 00:15:10.934 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:10.934 "is_configured": true, 00:15:10.934 "data_offset": 2048, 00:15:10.934 "data_size": 63488 00:15:10.934 }, 00:15:10.934 { 00:15:10.934 "name": "BaseBdev3", 00:15:10.934 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:10.934 "is_configured": true, 00:15:10.934 "data_offset": 2048, 00:15:10.934 "data_size": 63488 00:15:10.934 } 00:15:10.934 ] 00:15:10.934 }' 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.934 
03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.934 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.934 "name": "raid_bdev1", 00:15:10.934 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:10.934 "strip_size_kb": 64, 00:15:10.934 "state": "online", 00:15:10.935 "raid_level": "raid5f", 00:15:10.935 "superblock": true, 00:15:10.935 "num_base_bdevs": 3, 00:15:10.935 "num_base_bdevs_discovered": 3, 00:15:10.935 "num_base_bdevs_operational": 3, 00:15:10.935 "base_bdevs_list": [ 00:15:10.935 { 00:15:10.935 "name": "spare", 00:15:10.935 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:10.935 "is_configured": true, 00:15:10.935 "data_offset": 2048, 00:15:10.935 "data_size": 63488 00:15:10.935 }, 00:15:10.935 { 00:15:10.935 "name": "BaseBdev2", 00:15:10.935 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:10.935 "is_configured": true, 00:15:10.935 "data_offset": 2048, 00:15:10.935 "data_size": 63488 00:15:10.935 }, 00:15:10.935 { 00:15:10.935 "name": "BaseBdev3", 00:15:10.935 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:10.935 "is_configured": true, 00:15:10.935 "data_offset": 2048, 
00:15:10.935 "data_size": 63488 00:15:10.935 } 00:15:10.935 ] 00:15:10.935 }' 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.194 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.194 "name": "raid_bdev1", 00:15:11.194 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:11.194 "strip_size_kb": 64, 00:15:11.194 "state": "online", 00:15:11.194 "raid_level": "raid5f", 00:15:11.194 "superblock": true, 00:15:11.194 "num_base_bdevs": 3, 00:15:11.194 "num_base_bdevs_discovered": 3, 00:15:11.194 "num_base_bdevs_operational": 3, 00:15:11.194 "base_bdevs_list": [ 00:15:11.194 { 00:15:11.194 "name": "spare", 00:15:11.194 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:11.195 "is_configured": true, 00:15:11.195 "data_offset": 2048, 00:15:11.195 "data_size": 63488 00:15:11.195 }, 00:15:11.195 { 00:15:11.195 "name": "BaseBdev2", 00:15:11.195 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:11.195 "is_configured": true, 00:15:11.195 "data_offset": 2048, 00:15:11.195 "data_size": 63488 00:15:11.195 }, 00:15:11.195 { 00:15:11.195 "name": "BaseBdev3", 00:15:11.195 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:11.195 "is_configured": true, 00:15:11.195 "data_offset": 2048, 00:15:11.195 "data_size": 63488 00:15:11.195 } 00:15:11.195 ] 00:15:11.195 }' 00:15:11.195 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.195 03:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.454 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.454 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.454 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.454 [2024-11-20 03:22:01.085807] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.454 [2024-11-20 03:22:01.085899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.454 [2024-11-20 03:22:01.086026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.454 [2024-11-20 03:22:01.086140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.454 [2024-11-20 03:22:01.086206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:11.713 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.713 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.713 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.713 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.713 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:11.713 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.713 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:11.714 03:22:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:11.714 /dev/nbd0 00:15:11.714 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.974 1+0 records in 00:15:11.974 1+0 records out 00:15:11.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244299 s, 16.8 MB/s 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:11.974 /dev/nbd1 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:11.974 
03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.974 1+0 records in 00:15:11.974 1+0 records out 00:15:11.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390854 s, 10.5 MB/s 00:15:11.974 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.234 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:12.234 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.234 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.234 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:12.234 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.234 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.235 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:12.235 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:12.235 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.235 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.235 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.235 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:12.235 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.235 03:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.495 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.755 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.755 [2024-11-20 03:22:02.270914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.755 [2024-11-20 03:22:02.270980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.755 [2024-11-20 03:22:02.271003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:12.756 [2024-11-20 03:22:02.271016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.756 [2024-11-20 03:22:02.273459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.756 [2024-11-20 03:22:02.273501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.756 [2024-11-20 03:22:02.273587] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:12.756 [2024-11-20 03:22:02.273664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.756 [2024-11-20 03:22:02.273818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.756 [2024-11-20 03:22:02.273980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.756 spare 00:15:12.756 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.756 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:12.756 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.756 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.756 [2024-11-20 03:22:02.373882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:12.756 [2024-11-20 03:22:02.373914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:12.756 [2024-11-20 03:22:02.374202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:12.756 [2024-11-20 03:22:02.379848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:12.756 [2024-11-20 03:22:02.379881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:12.756 [2024-11-20 03:22:02.380080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.016 "name": "raid_bdev1", 00:15:13.016 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:13.016 "strip_size_kb": 64, 00:15:13.016 "state": "online", 00:15:13.016 "raid_level": "raid5f", 00:15:13.016 "superblock": true, 00:15:13.016 "num_base_bdevs": 3, 00:15:13.016 "num_base_bdevs_discovered": 3, 00:15:13.016 "num_base_bdevs_operational": 3, 00:15:13.016 "base_bdevs_list": [ 00:15:13.016 { 
00:15:13.016 "name": "spare", 00:15:13.016 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:13.016 "is_configured": true, 00:15:13.016 "data_offset": 2048, 00:15:13.016 "data_size": 63488 00:15:13.016 }, 00:15:13.016 { 00:15:13.016 "name": "BaseBdev2", 00:15:13.016 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:13.016 "is_configured": true, 00:15:13.016 "data_offset": 2048, 00:15:13.016 "data_size": 63488 00:15:13.016 }, 00:15:13.016 { 00:15:13.016 "name": "BaseBdev3", 00:15:13.016 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:13.016 "is_configured": true, 00:15:13.016 "data_offset": 2048, 00:15:13.016 "data_size": 63488 00:15:13.016 } 00:15:13.016 ] 00:15:13.016 }' 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.016 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.276 "name": "raid_bdev1", 00:15:13.276 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:13.276 "strip_size_kb": 64, 00:15:13.276 "state": "online", 00:15:13.276 "raid_level": "raid5f", 00:15:13.276 "superblock": true, 00:15:13.276 "num_base_bdevs": 3, 00:15:13.276 "num_base_bdevs_discovered": 3, 00:15:13.276 "num_base_bdevs_operational": 3, 00:15:13.276 "base_bdevs_list": [ 00:15:13.276 { 00:15:13.276 "name": "spare", 00:15:13.276 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:13.276 "is_configured": true, 00:15:13.276 "data_offset": 2048, 00:15:13.276 "data_size": 63488 00:15:13.276 }, 00:15:13.276 { 00:15:13.276 "name": "BaseBdev2", 00:15:13.276 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:13.276 "is_configured": true, 00:15:13.276 "data_offset": 2048, 00:15:13.276 "data_size": 63488 00:15:13.276 }, 00:15:13.276 { 00:15:13.276 "name": "BaseBdev3", 00:15:13.276 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:13.276 "is_configured": true, 00:15:13.276 "data_offset": 2048, 00:15:13.276 "data_size": 63488 00:15:13.276 } 00:15:13.276 ] 00:15:13.276 }' 00:15:13.276 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.535 03:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.535 [2024-11-20 03:22:03.002129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.535 "name": "raid_bdev1", 00:15:13.535 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:13.535 "strip_size_kb": 64, 00:15:13.535 "state": "online", 00:15:13.535 "raid_level": "raid5f", 00:15:13.535 "superblock": true, 00:15:13.535 "num_base_bdevs": 3, 00:15:13.535 "num_base_bdevs_discovered": 2, 00:15:13.535 "num_base_bdevs_operational": 2, 00:15:13.535 "base_bdevs_list": [ 00:15:13.535 { 00:15:13.535 "name": null, 00:15:13.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.535 "is_configured": false, 00:15:13.535 "data_offset": 0, 00:15:13.535 "data_size": 63488 00:15:13.535 }, 00:15:13.535 { 00:15:13.535 "name": "BaseBdev2", 00:15:13.535 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:13.535 "is_configured": true, 00:15:13.535 "data_offset": 2048, 00:15:13.535 "data_size": 63488 00:15:13.535 }, 00:15:13.535 { 00:15:13.535 "name": "BaseBdev3", 00:15:13.535 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:13.535 "is_configured": true, 00:15:13.535 "data_offset": 2048, 00:15:13.535 "data_size": 63488 00:15:13.535 } 00:15:13.535 ] 00:15:13.535 }' 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.535 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:14.104 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.104 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.104 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.104 [2024-11-20 03:22:03.489407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.104 [2024-11-20 03:22:03.489698] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:14.104 [2024-11-20 03:22:03.489765] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:14.104 [2024-11-20 03:22:03.489811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.104 [2024-11-20 03:22:03.506884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:14.104 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.104 03:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:14.104 [2024-11-20 03:22:03.515027] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.043 "name": "raid_bdev1", 00:15:15.043 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:15.043 "strip_size_kb": 64, 00:15:15.043 "state": "online", 00:15:15.043 "raid_level": "raid5f", 00:15:15.043 "superblock": true, 00:15:15.043 "num_base_bdevs": 3, 00:15:15.043 "num_base_bdevs_discovered": 3, 00:15:15.043 "num_base_bdevs_operational": 3, 00:15:15.043 "process": { 00:15:15.043 "type": "rebuild", 00:15:15.043 "target": "spare", 00:15:15.043 "progress": { 00:15:15.043 "blocks": 20480, 00:15:15.043 "percent": 16 00:15:15.043 } 00:15:15.043 }, 00:15:15.043 "base_bdevs_list": [ 00:15:15.043 { 00:15:15.043 "name": "spare", 00:15:15.043 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:15.043 "is_configured": true, 00:15:15.043 "data_offset": 2048, 00:15:15.043 "data_size": 63488 00:15:15.043 }, 00:15:15.043 { 00:15:15.043 "name": "BaseBdev2", 00:15:15.043 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:15.043 "is_configured": true, 00:15:15.043 "data_offset": 2048, 00:15:15.043 "data_size": 63488 00:15:15.043 }, 00:15:15.043 { 00:15:15.043 "name": "BaseBdev3", 00:15:15.043 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:15.043 "is_configured": true, 00:15:15.043 "data_offset": 2048, 00:15:15.043 "data_size": 63488 00:15:15.043 } 00:15:15.043 ] 00:15:15.043 }' 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.043 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.043 [2024-11-20 03:22:04.661894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.302 [2024-11-20 03:22:04.723436] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:15.302 [2024-11-20 03:22:04.723501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.302 [2024-11-20 03:22:04.723518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.302 [2024-11-20 03:22:04.723528] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.302 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.303 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.303 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.303 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.303 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.303 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.303 "name": "raid_bdev1", 00:15:15.303 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:15.303 "strip_size_kb": 64, 00:15:15.303 "state": "online", 00:15:15.303 "raid_level": "raid5f", 00:15:15.303 "superblock": true, 00:15:15.303 "num_base_bdevs": 3, 00:15:15.303 "num_base_bdevs_discovered": 2, 00:15:15.303 "num_base_bdevs_operational": 2, 00:15:15.303 "base_bdevs_list": [ 00:15:15.303 { 00:15:15.303 "name": null, 00:15:15.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.303 "is_configured": false, 00:15:15.303 "data_offset": 0, 00:15:15.303 "data_size": 63488 00:15:15.303 }, 00:15:15.303 { 00:15:15.303 "name": "BaseBdev2", 00:15:15.303 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:15.303 "is_configured": true, 00:15:15.303 
"data_offset": 2048, 00:15:15.303 "data_size": 63488 00:15:15.303 }, 00:15:15.303 { 00:15:15.303 "name": "BaseBdev3", 00:15:15.303 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:15.303 "is_configured": true, 00:15:15.303 "data_offset": 2048, 00:15:15.303 "data_size": 63488 00:15:15.303 } 00:15:15.303 ] 00:15:15.303 }' 00:15:15.303 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.303 03:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.870 03:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.870 03:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.870 03:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.870 [2024-11-20 03:22:05.230822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:15.870 [2024-11-20 03:22:05.230951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.870 [2024-11-20 03:22:05.230995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:15.870 [2024-11-20 03:22:05.231032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.870 [2024-11-20 03:22:05.231571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.870 [2024-11-20 03:22:05.231674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:15.870 [2024-11-20 03:22:05.231838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:15.870 [2024-11-20 03:22:05.231888] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:15.870 [2024-11-20 03:22:05.231938] bdev_raid.c:3758:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:15.870 [2024-11-20 03:22:05.231998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.870 [2024-11-20 03:22:05.248016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:15.870 spare 00:15:15.870 03:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.870 03:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:15.870 [2024-11-20 03:22:05.255949] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.807 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.807 "name": "raid_bdev1", 00:15:16.807 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 
00:15:16.807 "strip_size_kb": 64, 00:15:16.807 "state": "online", 00:15:16.807 "raid_level": "raid5f", 00:15:16.807 "superblock": true, 00:15:16.807 "num_base_bdevs": 3, 00:15:16.807 "num_base_bdevs_discovered": 3, 00:15:16.807 "num_base_bdevs_operational": 3, 00:15:16.807 "process": { 00:15:16.807 "type": "rebuild", 00:15:16.807 "target": "spare", 00:15:16.807 "progress": { 00:15:16.807 "blocks": 20480, 00:15:16.807 "percent": 16 00:15:16.807 } 00:15:16.807 }, 00:15:16.807 "base_bdevs_list": [ 00:15:16.807 { 00:15:16.807 "name": "spare", 00:15:16.807 "uuid": "859a372d-214f-5942-9892-fb267548b4b5", 00:15:16.807 "is_configured": true, 00:15:16.807 "data_offset": 2048, 00:15:16.807 "data_size": 63488 00:15:16.807 }, 00:15:16.807 { 00:15:16.807 "name": "BaseBdev2", 00:15:16.807 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:16.807 "is_configured": true, 00:15:16.807 "data_offset": 2048, 00:15:16.808 "data_size": 63488 00:15:16.808 }, 00:15:16.808 { 00:15:16.808 "name": "BaseBdev3", 00:15:16.808 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:16.808 "is_configured": true, 00:15:16.808 "data_offset": 2048, 00:15:16.808 "data_size": 63488 00:15:16.808 } 00:15:16.808 ] 00:15:16.808 }' 00:15:16.808 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.808 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.808 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.808 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.808 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:16.808 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.808 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:16.808 [2024-11-20 03:22:06.410825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.067 [2024-11-20 03:22:06.464320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.067 [2024-11-20 03:22:06.464429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.067 [2024-11-20 03:22:06.464450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.067 [2024-11-20 03:22:06.464458] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.067 "name": "raid_bdev1", 00:15:17.067 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:17.067 "strip_size_kb": 64, 00:15:17.067 "state": "online", 00:15:17.067 "raid_level": "raid5f", 00:15:17.067 "superblock": true, 00:15:17.067 "num_base_bdevs": 3, 00:15:17.067 "num_base_bdevs_discovered": 2, 00:15:17.067 "num_base_bdevs_operational": 2, 00:15:17.067 "base_bdevs_list": [ 00:15:17.067 { 00:15:17.067 "name": null, 00:15:17.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.067 "is_configured": false, 00:15:17.067 "data_offset": 0, 00:15:17.067 "data_size": 63488 00:15:17.067 }, 00:15:17.067 { 00:15:17.067 "name": "BaseBdev2", 00:15:17.067 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:17.067 "is_configured": true, 00:15:17.067 "data_offset": 2048, 00:15:17.067 "data_size": 63488 00:15:17.067 }, 00:15:17.067 { 00:15:17.067 "name": "BaseBdev3", 00:15:17.067 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:17.067 "is_configured": true, 00:15:17.067 "data_offset": 2048, 00:15:17.067 "data_size": 63488 00:15:17.067 } 00:15:17.067 ] 00:15:17.067 }' 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.067 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.636 03:22:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.636 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.636 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.636 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.636 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.636 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.636 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.636 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 03:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.636 "name": "raid_bdev1", 00:15:17.636 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:17.636 "strip_size_kb": 64, 00:15:17.636 "state": "online", 00:15:17.636 "raid_level": "raid5f", 00:15:17.636 "superblock": true, 00:15:17.636 "num_base_bdevs": 3, 00:15:17.636 "num_base_bdevs_discovered": 2, 00:15:17.636 "num_base_bdevs_operational": 2, 00:15:17.636 "base_bdevs_list": [ 00:15:17.636 { 00:15:17.636 "name": null, 00:15:17.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.636 "is_configured": false, 00:15:17.636 "data_offset": 0, 00:15:17.636 "data_size": 63488 00:15:17.636 }, 00:15:17.636 { 00:15:17.636 "name": "BaseBdev2", 00:15:17.636 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:17.636 "is_configured": true, 00:15:17.636 "data_offset": 2048, 00:15:17.636 "data_size": 63488 00:15:17.636 }, 00:15:17.636 { 00:15:17.636 "name": "BaseBdev3", 00:15:17.636 "uuid": 
"880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:17.636 "is_configured": true, 00:15:17.636 "data_offset": 2048, 00:15:17.636 "data_size": 63488 00:15:17.636 } 00:15:17.636 ] 00:15:17.636 }' 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 [2024-11-20 03:22:07.125976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:17.636 [2024-11-20 03:22:07.126032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.636 [2024-11-20 03:22:07.126056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:17.636 [2024-11-20 03:22:07.126065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.636 [2024-11-20 03:22:07.126530] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.636 [2024-11-20 03:22:07.126547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:17.636 [2024-11-20 03:22:07.126647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:17.636 [2024-11-20 03:22:07.126662] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:17.636 [2024-11-20 03:22:07.126682] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:17.636 [2024-11-20 03:22:07.126695] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:17.636 BaseBdev1 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.636 03:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.574 "name": "raid_bdev1", 00:15:18.574 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:18.574 "strip_size_kb": 64, 00:15:18.574 "state": "online", 00:15:18.574 "raid_level": "raid5f", 00:15:18.574 "superblock": true, 00:15:18.574 "num_base_bdevs": 3, 00:15:18.574 "num_base_bdevs_discovered": 2, 00:15:18.574 "num_base_bdevs_operational": 2, 00:15:18.574 "base_bdevs_list": [ 00:15:18.574 { 00:15:18.574 "name": null, 00:15:18.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.574 "is_configured": false, 00:15:18.574 "data_offset": 0, 00:15:18.574 "data_size": 63488 00:15:18.574 }, 00:15:18.574 { 00:15:18.574 "name": "BaseBdev2", 00:15:18.574 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:18.574 "is_configured": true, 00:15:18.574 "data_offset": 2048, 00:15:18.574 "data_size": 63488 00:15:18.574 }, 00:15:18.574 { 00:15:18.574 "name": "BaseBdev3", 00:15:18.574 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:18.574 "is_configured": true, 00:15:18.574 "data_offset": 2048, 00:15:18.574 "data_size": 63488 00:15:18.574 } 00:15:18.574 ] 00:15:18.574 }' 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:18.574 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.144 "name": "raid_bdev1", 00:15:19.144 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:19.144 "strip_size_kb": 64, 00:15:19.144 "state": "online", 00:15:19.144 "raid_level": "raid5f", 00:15:19.144 "superblock": true, 00:15:19.144 "num_base_bdevs": 3, 00:15:19.144 "num_base_bdevs_discovered": 2, 00:15:19.144 "num_base_bdevs_operational": 2, 00:15:19.144 "base_bdevs_list": [ 00:15:19.144 { 00:15:19.144 "name": null, 00:15:19.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.144 "is_configured": false, 00:15:19.144 "data_offset": 0, 00:15:19.144 "data_size": 63488 00:15:19.144 }, 00:15:19.144 { 00:15:19.144 "name": 
"BaseBdev2", 00:15:19.144 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:19.144 "is_configured": true, 00:15:19.144 "data_offset": 2048, 00:15:19.144 "data_size": 63488 00:15:19.144 }, 00:15:19.144 { 00:15:19.144 "name": "BaseBdev3", 00:15:19.144 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:19.144 "is_configured": true, 00:15:19.144 "data_offset": 2048, 00:15:19.144 "data_size": 63488 00:15:19.144 } 00:15:19.144 ] 00:15:19.144 }' 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.144 [2024-11-20 03:22:08.723336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.144 [2024-11-20 03:22:08.723517] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:19.144 [2024-11-20 03:22:08.723535] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:19.144 request: 00:15:19.144 { 00:15:19.144 "base_bdev": "BaseBdev1", 00:15:19.144 "raid_bdev": "raid_bdev1", 00:15:19.144 "method": "bdev_raid_add_base_bdev", 00:15:19.144 "req_id": 1 00:15:19.144 } 00:15:19.144 Got JSON-RPC error response 00:15:19.144 response: 00:15:19.144 { 00:15:19.144 "code": -22, 00:15:19.144 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:19.144 } 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.144 03:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.526 "name": "raid_bdev1", 00:15:20.526 "uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:20.526 "strip_size_kb": 64, 00:15:20.526 "state": "online", 00:15:20.526 "raid_level": "raid5f", 00:15:20.526 "superblock": true, 00:15:20.526 "num_base_bdevs": 3, 00:15:20.526 "num_base_bdevs_discovered": 2, 00:15:20.526 "num_base_bdevs_operational": 2, 00:15:20.526 "base_bdevs_list": [ 00:15:20.526 { 00:15:20.526 "name": null, 00:15:20.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.526 "is_configured": false, 00:15:20.526 "data_offset": 0, 00:15:20.526 
"data_size": 63488 00:15:20.526 }, 00:15:20.526 { 00:15:20.526 "name": "BaseBdev2", 00:15:20.526 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:20.526 "is_configured": true, 00:15:20.526 "data_offset": 2048, 00:15:20.526 "data_size": 63488 00:15:20.526 }, 00:15:20.526 { 00:15:20.526 "name": "BaseBdev3", 00:15:20.526 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:20.526 "is_configured": true, 00:15:20.526 "data_offset": 2048, 00:15:20.526 "data_size": 63488 00:15:20.526 } 00:15:20.526 ] 00:15:20.526 }' 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.526 03:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.526 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.526 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.526 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.526 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.526 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.526 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.526 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.526 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.526 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.787 "name": "raid_bdev1", 00:15:20.787 
"uuid": "d7129219-8369-4d08-8d27-71424b9042cb", 00:15:20.787 "strip_size_kb": 64, 00:15:20.787 "state": "online", 00:15:20.787 "raid_level": "raid5f", 00:15:20.787 "superblock": true, 00:15:20.787 "num_base_bdevs": 3, 00:15:20.787 "num_base_bdevs_discovered": 2, 00:15:20.787 "num_base_bdevs_operational": 2, 00:15:20.787 "base_bdevs_list": [ 00:15:20.787 { 00:15:20.787 "name": null, 00:15:20.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.787 "is_configured": false, 00:15:20.787 "data_offset": 0, 00:15:20.787 "data_size": 63488 00:15:20.787 }, 00:15:20.787 { 00:15:20.787 "name": "BaseBdev2", 00:15:20.787 "uuid": "93de5b65-513c-5db0-a213-3f7f265973f1", 00:15:20.787 "is_configured": true, 00:15:20.787 "data_offset": 2048, 00:15:20.787 "data_size": 63488 00:15:20.787 }, 00:15:20.787 { 00:15:20.787 "name": "BaseBdev3", 00:15:20.787 "uuid": "880ee9a4-9189-5e68-8b3d-42378ecba69c", 00:15:20.787 "is_configured": true, 00:15:20.787 "data_offset": 2048, 00:15:20.787 "data_size": 63488 00:15:20.787 } 00:15:20.787 ] 00:15:20.787 }' 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81824 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81824 ']' 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81824 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81824 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.787 killing process with pid 81824 00:15:20.787 Received shutdown signal, test time was about 60.000000 seconds 00:15:20.787 00:15:20.787 Latency(us) 00:15:20.787 [2024-11-20T03:22:10.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.787 [2024-11-20T03:22:10.422Z] =================================================================================================================== 00:15:20.787 [2024-11-20T03:22:10.422Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81824' 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81824 00:15:20.787 [2024-11-20 03:22:10.313151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.787 [2024-11-20 03:22:10.313293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.787 03:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81824 00:15:20.787 [2024-11-20 03:22:10.313365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.787 [2024-11-20 03:22:10.313380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:21.356 [2024-11-20 03:22:10.707328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.296 ************************************ 00:15:22.296 END TEST 
raid5f_rebuild_test_sb 00:15:22.296 ************************************ 00:15:22.296 03:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:22.296 00:15:22.296 real 0m23.190s 00:15:22.296 user 0m29.768s 00:15:22.296 sys 0m2.688s 00:15:22.296 03:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.296 03:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.296 03:22:11 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:22.296 03:22:11 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:22.296 03:22:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:22.296 03:22:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.296 03:22:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.296 ************************************ 00:15:22.296 START TEST raid5f_state_function_test 00:15:22.296 ************************************ 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:22.296 
03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 
00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82572 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82572' 00:15:22.296 Process raid pid: 82572 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82572 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82572 ']' 00:15:22.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.296 03:22:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.556 [2024-11-20 03:22:11.943813] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:15:22.556 [2024-11-20 03:22:11.944009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.556 [2024-11-20 03:22:12.119605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.817 [2024-11-20 03:22:12.234489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.817 [2024-11-20 03:22:12.443454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.817 [2024-11-20 03:22:12.443489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.387 [2024-11-20 03:22:12.785987] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.387 [2024-11-20 03:22:12.786043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.387 [2024-11-20 03:22:12.786053] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.387 [2024-11-20 03:22:12.786079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.387 [2024-11-20 03:22:12.786086] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:23.387 [2024-11-20 03:22:12.786095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.387 [2024-11-20 03:22:12.786101] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.387 [2024-11-20 03:22:12.786110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.387 03:22:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.387 "name": "Existed_Raid", 00:15:23.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.387 "strip_size_kb": 64, 00:15:23.387 "state": "configuring", 00:15:23.387 "raid_level": "raid5f", 00:15:23.387 "superblock": false, 00:15:23.387 "num_base_bdevs": 4, 00:15:23.387 "num_base_bdevs_discovered": 0, 00:15:23.387 "num_base_bdevs_operational": 4, 00:15:23.387 "base_bdevs_list": [ 00:15:23.387 { 00:15:23.387 "name": "BaseBdev1", 00:15:23.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.387 "is_configured": false, 00:15:23.387 "data_offset": 0, 00:15:23.387 "data_size": 0 00:15:23.387 }, 00:15:23.387 { 00:15:23.387 "name": "BaseBdev2", 00:15:23.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.387 "is_configured": false, 00:15:23.387 "data_offset": 0, 00:15:23.387 "data_size": 0 00:15:23.387 }, 00:15:23.387 { 00:15:23.387 "name": "BaseBdev3", 00:15:23.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.387 "is_configured": false, 00:15:23.387 "data_offset": 0, 00:15:23.387 "data_size": 0 00:15:23.387 }, 00:15:23.387 { 00:15:23.387 "name": "BaseBdev4", 00:15:23.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.387 "is_configured": false, 00:15:23.387 "data_offset": 0, 00:15:23.387 "data_size": 0 00:15:23.387 } 00:15:23.387 ] 00:15:23.387 }' 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.387 03:22:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.647 [2024-11-20 03:22:13.225162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.647 [2024-11-20 03:22:13.225260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.647 [2024-11-20 03:22:13.233139] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.647 [2024-11-20 03:22:13.233221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.647 [2024-11-20 03:22:13.233249] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.647 [2024-11-20 03:22:13.233272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.647 [2024-11-20 03:22:13.233290] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.647 [2024-11-20 03:22:13.233311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.647 [2024-11-20 03:22:13.233329] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:23.647 [2024-11-20 03:22:13.233350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.647 [2024-11-20 03:22:13.277145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.647 BaseBdev1 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:23.647 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.907 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:23.907 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.907 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.907 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.907 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.907 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.907 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.907 
03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.907 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.907 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.907 [ 00:15:23.907 { 00:15:23.907 "name": "BaseBdev1", 00:15:23.907 "aliases": [ 00:15:23.907 "ecffe8b4-d23d-44e2-858c-cd741cdc71be" 00:15:23.907 ], 00:15:23.907 "product_name": "Malloc disk", 00:15:23.907 "block_size": 512, 00:15:23.907 "num_blocks": 65536, 00:15:23.907 "uuid": "ecffe8b4-d23d-44e2-858c-cd741cdc71be", 00:15:23.907 "assigned_rate_limits": { 00:15:23.907 "rw_ios_per_sec": 0, 00:15:23.907 "rw_mbytes_per_sec": 0, 00:15:23.907 "r_mbytes_per_sec": 0, 00:15:23.907 "w_mbytes_per_sec": 0 00:15:23.907 }, 00:15:23.907 "claimed": true, 00:15:23.908 "claim_type": "exclusive_write", 00:15:23.908 "zoned": false, 00:15:23.908 "supported_io_types": { 00:15:23.908 "read": true, 00:15:23.908 "write": true, 00:15:23.908 "unmap": true, 00:15:23.908 "flush": true, 00:15:23.908 "reset": true, 00:15:23.908 "nvme_admin": false, 00:15:23.908 "nvme_io": false, 00:15:23.908 "nvme_io_md": false, 00:15:23.908 "write_zeroes": true, 00:15:23.908 "zcopy": true, 00:15:23.908 "get_zone_info": false, 00:15:23.908 "zone_management": false, 00:15:23.908 "zone_append": false, 00:15:23.908 "compare": false, 00:15:23.908 "compare_and_write": false, 00:15:23.908 "abort": true, 00:15:23.908 "seek_hole": false, 00:15:23.908 "seek_data": false, 00:15:23.908 "copy": true, 00:15:23.908 "nvme_iov_md": false 00:15:23.908 }, 00:15:23.908 "memory_domains": [ 00:15:23.908 { 00:15:23.908 "dma_device_id": "system", 00:15:23.908 "dma_device_type": 1 00:15:23.908 }, 00:15:23.908 { 00:15:23.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.908 "dma_device_type": 2 00:15:23.908 } 00:15:23.908 ], 00:15:23.908 "driver_specific": {} 00:15:23.908 } 
00:15:23.908 ] 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.908 "name": "Existed_Raid", 00:15:23.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.908 "strip_size_kb": 64, 00:15:23.908 "state": "configuring", 00:15:23.908 "raid_level": "raid5f", 00:15:23.908 "superblock": false, 00:15:23.908 "num_base_bdevs": 4, 00:15:23.908 "num_base_bdevs_discovered": 1, 00:15:23.908 "num_base_bdevs_operational": 4, 00:15:23.908 "base_bdevs_list": [ 00:15:23.908 { 00:15:23.908 "name": "BaseBdev1", 00:15:23.908 "uuid": "ecffe8b4-d23d-44e2-858c-cd741cdc71be", 00:15:23.908 "is_configured": true, 00:15:23.908 "data_offset": 0, 00:15:23.908 "data_size": 65536 00:15:23.908 }, 00:15:23.908 { 00:15:23.908 "name": "BaseBdev2", 00:15:23.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.908 "is_configured": false, 00:15:23.908 "data_offset": 0, 00:15:23.908 "data_size": 0 00:15:23.908 }, 00:15:23.908 { 00:15:23.908 "name": "BaseBdev3", 00:15:23.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.908 "is_configured": false, 00:15:23.908 "data_offset": 0, 00:15:23.908 "data_size": 0 00:15:23.908 }, 00:15:23.908 { 00:15:23.908 "name": "BaseBdev4", 00:15:23.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.908 "is_configured": false, 00:15:23.908 "data_offset": 0, 00:15:23.908 "data_size": 0 00:15:23.908 } 00:15:23.908 ] 00:15:23.908 }' 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.908 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.176 
[2024-11-20 03:22:13.744370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.176 [2024-11-20 03:22:13.744424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.176 [2024-11-20 03:22:13.752403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.176 [2024-11-20 03:22:13.754267] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.176 [2024-11-20 03:22:13.754371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.176 [2024-11-20 03:22:13.754408] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.176 [2024-11-20 03:22:13.754445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.176 [2024-11-20 03:22:13.754468] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.176 [2024-11-20 03:22:13.754509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.176 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.448 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.448 "name": "Existed_Raid", 00:15:24.448 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:24.448 "strip_size_kb": 64, 00:15:24.448 "state": "configuring", 00:15:24.448 "raid_level": "raid5f", 00:15:24.448 "superblock": false, 00:15:24.448 "num_base_bdevs": 4, 00:15:24.448 "num_base_bdevs_discovered": 1, 00:15:24.448 "num_base_bdevs_operational": 4, 00:15:24.448 "base_bdevs_list": [ 00:15:24.448 { 00:15:24.448 "name": "BaseBdev1", 00:15:24.448 "uuid": "ecffe8b4-d23d-44e2-858c-cd741cdc71be", 00:15:24.448 "is_configured": true, 00:15:24.448 "data_offset": 0, 00:15:24.448 "data_size": 65536 00:15:24.448 }, 00:15:24.448 { 00:15:24.448 "name": "BaseBdev2", 00:15:24.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.448 "is_configured": false, 00:15:24.448 "data_offset": 0, 00:15:24.448 "data_size": 0 00:15:24.448 }, 00:15:24.448 { 00:15:24.448 "name": "BaseBdev3", 00:15:24.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.448 "is_configured": false, 00:15:24.448 "data_offset": 0, 00:15:24.448 "data_size": 0 00:15:24.448 }, 00:15:24.448 { 00:15:24.448 "name": "BaseBdev4", 00:15:24.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.448 "is_configured": false, 00:15:24.448 "data_offset": 0, 00:15:24.448 "data_size": 0 00:15:24.448 } 00:15:24.448 ] 00:15:24.448 }' 00:15:24.448 03:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.448 03:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.708 [2024-11-20 03:22:14.226260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.708 BaseBdev2 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.708 [ 00:15:24.708 { 00:15:24.708 "name": "BaseBdev2", 00:15:24.708 "aliases": [ 00:15:24.708 "0c58e1ed-461b-448d-bf7c-bc3ba57a20e8" 00:15:24.708 ], 00:15:24.708 "product_name": "Malloc disk", 00:15:24.708 "block_size": 512, 00:15:24.708 "num_blocks": 65536, 00:15:24.708 "uuid": "0c58e1ed-461b-448d-bf7c-bc3ba57a20e8", 00:15:24.708 "assigned_rate_limits": { 00:15:24.708 "rw_ios_per_sec": 0, 00:15:24.708 "rw_mbytes_per_sec": 0, 00:15:24.708 
"r_mbytes_per_sec": 0, 00:15:24.708 "w_mbytes_per_sec": 0 00:15:24.708 }, 00:15:24.708 "claimed": true, 00:15:24.708 "claim_type": "exclusive_write", 00:15:24.708 "zoned": false, 00:15:24.708 "supported_io_types": { 00:15:24.708 "read": true, 00:15:24.708 "write": true, 00:15:24.708 "unmap": true, 00:15:24.708 "flush": true, 00:15:24.708 "reset": true, 00:15:24.708 "nvme_admin": false, 00:15:24.708 "nvme_io": false, 00:15:24.708 "nvme_io_md": false, 00:15:24.708 "write_zeroes": true, 00:15:24.708 "zcopy": true, 00:15:24.708 "get_zone_info": false, 00:15:24.708 "zone_management": false, 00:15:24.708 "zone_append": false, 00:15:24.708 "compare": false, 00:15:24.708 "compare_and_write": false, 00:15:24.708 "abort": true, 00:15:24.708 "seek_hole": false, 00:15:24.708 "seek_data": false, 00:15:24.708 "copy": true, 00:15:24.708 "nvme_iov_md": false 00:15:24.708 }, 00:15:24.708 "memory_domains": [ 00:15:24.708 { 00:15:24.708 "dma_device_id": "system", 00:15:24.708 "dma_device_type": 1 00:15:24.708 }, 00:15:24.708 { 00:15:24.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.708 "dma_device_type": 2 00:15:24.708 } 00:15:24.708 ], 00:15:24.708 "driver_specific": {} 00:15:24.708 } 00:15:24.708 ] 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.708 "name": "Existed_Raid", 00:15:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.708 "strip_size_kb": 64, 00:15:24.708 "state": "configuring", 00:15:24.708 "raid_level": "raid5f", 00:15:24.708 "superblock": false, 00:15:24.708 "num_base_bdevs": 4, 00:15:24.708 "num_base_bdevs_discovered": 2, 00:15:24.708 "num_base_bdevs_operational": 4, 00:15:24.708 "base_bdevs_list": [ 00:15:24.708 { 00:15:24.708 "name": "BaseBdev1", 00:15:24.708 "uuid": 
"ecffe8b4-d23d-44e2-858c-cd741cdc71be", 00:15:24.708 "is_configured": true, 00:15:24.708 "data_offset": 0, 00:15:24.708 "data_size": 65536 00:15:24.708 }, 00:15:24.708 { 00:15:24.708 "name": "BaseBdev2", 00:15:24.708 "uuid": "0c58e1ed-461b-448d-bf7c-bc3ba57a20e8", 00:15:24.708 "is_configured": true, 00:15:24.708 "data_offset": 0, 00:15:24.708 "data_size": 65536 00:15:24.708 }, 00:15:24.708 { 00:15:24.708 "name": "BaseBdev3", 00:15:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.708 "is_configured": false, 00:15:24.708 "data_offset": 0, 00:15:24.708 "data_size": 0 00:15:24.708 }, 00:15:24.708 { 00:15:24.708 "name": "BaseBdev4", 00:15:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.708 "is_configured": false, 00:15:24.708 "data_offset": 0, 00:15:24.708 "data_size": 0 00:15:24.708 } 00:15:24.708 ] 00:15:24.708 }' 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.708 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.278 [2024-11-20 03:22:14.760683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.278 BaseBdev3 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.278 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.278 [ 00:15:25.278 { 00:15:25.278 "name": "BaseBdev3", 00:15:25.278 "aliases": [ 00:15:25.278 "f99e6cd6-7667-4beb-a459-633d96f5cbdb" 00:15:25.278 ], 00:15:25.278 "product_name": "Malloc disk", 00:15:25.278 "block_size": 512, 00:15:25.278 "num_blocks": 65536, 00:15:25.278 "uuid": "f99e6cd6-7667-4beb-a459-633d96f5cbdb", 00:15:25.278 "assigned_rate_limits": { 00:15:25.278 "rw_ios_per_sec": 0, 00:15:25.278 "rw_mbytes_per_sec": 0, 00:15:25.278 "r_mbytes_per_sec": 0, 00:15:25.278 "w_mbytes_per_sec": 0 00:15:25.278 }, 00:15:25.279 "claimed": true, 00:15:25.279 "claim_type": "exclusive_write", 00:15:25.279 "zoned": false, 00:15:25.279 "supported_io_types": { 00:15:25.279 "read": true, 00:15:25.279 "write": true, 00:15:25.279 "unmap": true, 00:15:25.279 "flush": true, 00:15:25.279 "reset": true, 00:15:25.279 "nvme_admin": false, 
00:15:25.279 "nvme_io": false, 00:15:25.279 "nvme_io_md": false, 00:15:25.279 "write_zeroes": true, 00:15:25.279 "zcopy": true, 00:15:25.279 "get_zone_info": false, 00:15:25.279 "zone_management": false, 00:15:25.279 "zone_append": false, 00:15:25.279 "compare": false, 00:15:25.279 "compare_and_write": false, 00:15:25.279 "abort": true, 00:15:25.279 "seek_hole": false, 00:15:25.279 "seek_data": false, 00:15:25.279 "copy": true, 00:15:25.279 "nvme_iov_md": false 00:15:25.279 }, 00:15:25.279 "memory_domains": [ 00:15:25.279 { 00:15:25.279 "dma_device_id": "system", 00:15:25.279 "dma_device_type": 1 00:15:25.279 }, 00:15:25.279 { 00:15:25.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.279 "dma_device_type": 2 00:15:25.279 } 00:15:25.279 ], 00:15:25.279 "driver_specific": {} 00:15:25.279 } 00:15:25.279 ] 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.279 "name": "Existed_Raid", 00:15:25.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.279 "strip_size_kb": 64, 00:15:25.279 "state": "configuring", 00:15:25.279 "raid_level": "raid5f", 00:15:25.279 "superblock": false, 00:15:25.279 "num_base_bdevs": 4, 00:15:25.279 "num_base_bdevs_discovered": 3, 00:15:25.279 "num_base_bdevs_operational": 4, 00:15:25.279 "base_bdevs_list": [ 00:15:25.279 { 00:15:25.279 "name": "BaseBdev1", 00:15:25.279 "uuid": "ecffe8b4-d23d-44e2-858c-cd741cdc71be", 00:15:25.279 "is_configured": true, 00:15:25.279 "data_offset": 0, 00:15:25.279 "data_size": 65536 00:15:25.279 }, 00:15:25.279 { 00:15:25.279 "name": "BaseBdev2", 00:15:25.279 "uuid": "0c58e1ed-461b-448d-bf7c-bc3ba57a20e8", 00:15:25.279 "is_configured": true, 00:15:25.279 "data_offset": 0, 00:15:25.279 "data_size": 65536 00:15:25.279 }, 00:15:25.279 { 
00:15:25.279 "name": "BaseBdev3", 00:15:25.279 "uuid": "f99e6cd6-7667-4beb-a459-633d96f5cbdb", 00:15:25.279 "is_configured": true, 00:15:25.279 "data_offset": 0, 00:15:25.279 "data_size": 65536 00:15:25.279 }, 00:15:25.279 { 00:15:25.279 "name": "BaseBdev4", 00:15:25.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.279 "is_configured": false, 00:15:25.279 "data_offset": 0, 00:15:25.279 "data_size": 0 00:15:25.279 } 00:15:25.279 ] 00:15:25.279 }' 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.279 03:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.849 [2024-11-20 03:22:15.295884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:25.849 [2024-11-20 03:22:15.295951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:25.849 [2024-11-20 03:22:15.295960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:25.849 [2024-11-20 03:22:15.296195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:25.849 [2024-11-20 03:22:15.303459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:25.849 [2024-11-20 03:22:15.303481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:25.849 [2024-11-20 03:22:15.303758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.849 BaseBdev4 00:15:25.849 03:22:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.849 [ 00:15:25.849 { 00:15:25.849 "name": "BaseBdev4", 00:15:25.849 "aliases": [ 00:15:25.849 "606541f4-e37e-4118-8710-35eb476478dc" 00:15:25.849 ], 00:15:25.849 "product_name": "Malloc disk", 00:15:25.849 "block_size": 512, 00:15:25.849 "num_blocks": 65536, 00:15:25.849 "uuid": "606541f4-e37e-4118-8710-35eb476478dc", 00:15:25.849 "assigned_rate_limits": { 00:15:25.849 "rw_ios_per_sec": 0, 00:15:25.849 
"rw_mbytes_per_sec": 0, 00:15:25.849 "r_mbytes_per_sec": 0, 00:15:25.849 "w_mbytes_per_sec": 0 00:15:25.849 }, 00:15:25.849 "claimed": true, 00:15:25.849 "claim_type": "exclusive_write", 00:15:25.849 "zoned": false, 00:15:25.849 "supported_io_types": { 00:15:25.849 "read": true, 00:15:25.849 "write": true, 00:15:25.849 "unmap": true, 00:15:25.849 "flush": true, 00:15:25.849 "reset": true, 00:15:25.849 "nvme_admin": false, 00:15:25.849 "nvme_io": false, 00:15:25.849 "nvme_io_md": false, 00:15:25.849 "write_zeroes": true, 00:15:25.849 "zcopy": true, 00:15:25.849 "get_zone_info": false, 00:15:25.849 "zone_management": false, 00:15:25.849 "zone_append": false, 00:15:25.849 "compare": false, 00:15:25.849 "compare_and_write": false, 00:15:25.849 "abort": true, 00:15:25.849 "seek_hole": false, 00:15:25.849 "seek_data": false, 00:15:25.849 "copy": true, 00:15:25.849 "nvme_iov_md": false 00:15:25.849 }, 00:15:25.849 "memory_domains": [ 00:15:25.849 { 00:15:25.849 "dma_device_id": "system", 00:15:25.849 "dma_device_type": 1 00:15:25.849 }, 00:15:25.849 { 00:15:25.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.849 "dma_device_type": 2 00:15:25.849 } 00:15:25.849 ], 00:15:25.849 "driver_specific": {} 00:15:25.849 } 00:15:25.849 ] 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.849 03:22:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.849 "name": "Existed_Raid", 00:15:25.849 "uuid": "532bc142-f52d-4d4d-bd7f-a82148bba7ca", 00:15:25.849 "strip_size_kb": 64, 00:15:25.849 "state": "online", 00:15:25.849 "raid_level": "raid5f", 00:15:25.849 "superblock": false, 00:15:25.849 "num_base_bdevs": 4, 00:15:25.849 "num_base_bdevs_discovered": 4, 00:15:25.849 "num_base_bdevs_operational": 4, 00:15:25.849 "base_bdevs_list": [ 00:15:25.849 { 00:15:25.849 "name": 
"BaseBdev1", 00:15:25.849 "uuid": "ecffe8b4-d23d-44e2-858c-cd741cdc71be", 00:15:25.849 "is_configured": true, 00:15:25.849 "data_offset": 0, 00:15:25.849 "data_size": 65536 00:15:25.849 }, 00:15:25.849 { 00:15:25.849 "name": "BaseBdev2", 00:15:25.849 "uuid": "0c58e1ed-461b-448d-bf7c-bc3ba57a20e8", 00:15:25.849 "is_configured": true, 00:15:25.849 "data_offset": 0, 00:15:25.849 "data_size": 65536 00:15:25.849 }, 00:15:25.849 { 00:15:25.849 "name": "BaseBdev3", 00:15:25.849 "uuid": "f99e6cd6-7667-4beb-a459-633d96f5cbdb", 00:15:25.849 "is_configured": true, 00:15:25.849 "data_offset": 0, 00:15:25.849 "data_size": 65536 00:15:25.849 }, 00:15:25.849 { 00:15:25.849 "name": "BaseBdev4", 00:15:25.849 "uuid": "606541f4-e37e-4118-8710-35eb476478dc", 00:15:25.849 "is_configured": true, 00:15:25.849 "data_offset": 0, 00:15:25.849 "data_size": 65536 00:15:25.849 } 00:15:25.849 ] 00:15:25.849 }' 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.849 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.109 [2024-11-20 03:22:15.711580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.109 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.370 "name": "Existed_Raid", 00:15:26.370 "aliases": [ 00:15:26.370 "532bc142-f52d-4d4d-bd7f-a82148bba7ca" 00:15:26.370 ], 00:15:26.370 "product_name": "Raid Volume", 00:15:26.370 "block_size": 512, 00:15:26.370 "num_blocks": 196608, 00:15:26.370 "uuid": "532bc142-f52d-4d4d-bd7f-a82148bba7ca", 00:15:26.370 "assigned_rate_limits": { 00:15:26.370 "rw_ios_per_sec": 0, 00:15:26.370 "rw_mbytes_per_sec": 0, 00:15:26.370 "r_mbytes_per_sec": 0, 00:15:26.370 "w_mbytes_per_sec": 0 00:15:26.370 }, 00:15:26.370 "claimed": false, 00:15:26.370 "zoned": false, 00:15:26.370 "supported_io_types": { 00:15:26.370 "read": true, 00:15:26.370 "write": true, 00:15:26.370 "unmap": false, 00:15:26.370 "flush": false, 00:15:26.370 "reset": true, 00:15:26.370 "nvme_admin": false, 00:15:26.370 "nvme_io": false, 00:15:26.370 "nvme_io_md": false, 00:15:26.370 "write_zeroes": true, 00:15:26.370 "zcopy": false, 00:15:26.370 "get_zone_info": false, 00:15:26.370 "zone_management": false, 00:15:26.370 "zone_append": false, 00:15:26.370 "compare": false, 00:15:26.370 "compare_and_write": false, 00:15:26.370 "abort": false, 00:15:26.370 "seek_hole": false, 00:15:26.370 "seek_data": false, 00:15:26.370 "copy": false, 00:15:26.370 "nvme_iov_md": false 00:15:26.370 }, 00:15:26.370 "driver_specific": { 00:15:26.370 "raid": { 00:15:26.370 "uuid": "532bc142-f52d-4d4d-bd7f-a82148bba7ca", 00:15:26.370 "strip_size_kb": 64, 
00:15:26.370 "state": "online", 00:15:26.370 "raid_level": "raid5f", 00:15:26.370 "superblock": false, 00:15:26.370 "num_base_bdevs": 4, 00:15:26.370 "num_base_bdevs_discovered": 4, 00:15:26.370 "num_base_bdevs_operational": 4, 00:15:26.370 "base_bdevs_list": [ 00:15:26.370 { 00:15:26.370 "name": "BaseBdev1", 00:15:26.370 "uuid": "ecffe8b4-d23d-44e2-858c-cd741cdc71be", 00:15:26.370 "is_configured": true, 00:15:26.370 "data_offset": 0, 00:15:26.370 "data_size": 65536 00:15:26.370 }, 00:15:26.370 { 00:15:26.370 "name": "BaseBdev2", 00:15:26.370 "uuid": "0c58e1ed-461b-448d-bf7c-bc3ba57a20e8", 00:15:26.370 "is_configured": true, 00:15:26.370 "data_offset": 0, 00:15:26.370 "data_size": 65536 00:15:26.370 }, 00:15:26.370 { 00:15:26.370 "name": "BaseBdev3", 00:15:26.370 "uuid": "f99e6cd6-7667-4beb-a459-633d96f5cbdb", 00:15:26.370 "is_configured": true, 00:15:26.370 "data_offset": 0, 00:15:26.370 "data_size": 65536 00:15:26.370 }, 00:15:26.370 { 00:15:26.370 "name": "BaseBdev4", 00:15:26.370 "uuid": "606541f4-e37e-4118-8710-35eb476478dc", 00:15:26.370 "is_configured": true, 00:15:26.370 "data_offset": 0, 00:15:26.370 "data_size": 65536 00:15:26.370 } 00:15:26.370 ] 00:15:26.370 } 00:15:26.370 } 00:15:26.370 }' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:26.370 BaseBdev2 00:15:26.370 BaseBdev3 00:15:26.370 BaseBdev4' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.370 03:22:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.370 03:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:26.630 [2024-11-20 03:22:16.042794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.630 03:22:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.630 "name": "Existed_Raid", 00:15:26.630 "uuid": "532bc142-f52d-4d4d-bd7f-a82148bba7ca", 00:15:26.630 "strip_size_kb": 64, 00:15:26.630 "state": "online", 00:15:26.630 "raid_level": "raid5f", 00:15:26.630 "superblock": false, 00:15:26.630 "num_base_bdevs": 4, 00:15:26.630 "num_base_bdevs_discovered": 3, 00:15:26.630 "num_base_bdevs_operational": 3, 00:15:26.630 "base_bdevs_list": [ 00:15:26.630 { 00:15:26.630 "name": null, 00:15:26.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.630 "is_configured": false, 00:15:26.630 "data_offset": 0, 00:15:26.630 "data_size": 65536 00:15:26.630 }, 00:15:26.630 { 00:15:26.630 "name": "BaseBdev2", 00:15:26.630 "uuid": "0c58e1ed-461b-448d-bf7c-bc3ba57a20e8", 00:15:26.630 "is_configured": true, 00:15:26.630 "data_offset": 0, 00:15:26.630 "data_size": 65536 00:15:26.630 }, 00:15:26.630 { 00:15:26.630 "name": "BaseBdev3", 00:15:26.630 "uuid": "f99e6cd6-7667-4beb-a459-633d96f5cbdb", 00:15:26.630 "is_configured": true, 00:15:26.630 "data_offset": 0, 00:15:26.630 "data_size": 65536 00:15:26.630 }, 00:15:26.630 { 00:15:26.630 "name": "BaseBdev4", 00:15:26.630 "uuid": "606541f4-e37e-4118-8710-35eb476478dc", 00:15:26.630 "is_configured": true, 00:15:26.630 "data_offset": 0, 00:15:26.630 "data_size": 65536 00:15:26.630 } 00:15:26.630 ] 00:15:26.630 }' 00:15:26.630 
03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.630 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.200 [2024-11-20 03:22:16.599863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.200 [2024-11-20 03:22:16.599979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.200 [2024-11-20 03:22:16.695867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.200 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.200 [2024-11-20 03:22:16.755813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.460 03:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.460 [2024-11-20 03:22:16.909417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:27.460 [2024-11-20 03:22:16.909466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.460 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.721 BaseBdev2 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.721 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.721 [ 00:15:27.721 { 00:15:27.721 "name": "BaseBdev2", 00:15:27.721 "aliases": [ 00:15:27.721 "c90a459d-b23c-48a7-aeb5-d50c151e550b" 00:15:27.721 ], 00:15:27.721 "product_name": "Malloc disk", 00:15:27.721 "block_size": 512, 00:15:27.721 "num_blocks": 65536, 00:15:27.721 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:27.721 "assigned_rate_limits": { 00:15:27.721 "rw_ios_per_sec": 0, 00:15:27.721 "rw_mbytes_per_sec": 0, 00:15:27.722 "r_mbytes_per_sec": 0, 00:15:27.722 "w_mbytes_per_sec": 0 00:15:27.722 }, 00:15:27.722 "claimed": false, 00:15:27.722 "zoned": false, 00:15:27.722 "supported_io_types": { 00:15:27.722 "read": true, 00:15:27.722 "write": true, 00:15:27.722 "unmap": true, 00:15:27.722 "flush": true, 00:15:27.722 "reset": true, 00:15:27.722 "nvme_admin": false, 00:15:27.722 "nvme_io": false, 00:15:27.722 "nvme_io_md": false, 00:15:27.722 "write_zeroes": true, 00:15:27.722 "zcopy": true, 00:15:27.722 "get_zone_info": false, 00:15:27.722 "zone_management": false, 00:15:27.722 "zone_append": false, 00:15:27.722 "compare": false, 00:15:27.722 "compare_and_write": false, 00:15:27.722 "abort": true, 00:15:27.722 "seek_hole": false, 00:15:27.722 "seek_data": false, 00:15:27.722 "copy": true, 00:15:27.722 "nvme_iov_md": false 00:15:27.722 }, 00:15:27.722 "memory_domains": [ 00:15:27.722 { 00:15:27.722 "dma_device_id": "system", 00:15:27.722 
"dma_device_type": 1 00:15:27.722 }, 00:15:27.722 { 00:15:27.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.722 "dma_device_type": 2 00:15:27.722 } 00:15:27.722 ], 00:15:27.722 "driver_specific": {} 00:15:27.722 } 00:15:27.722 ] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.722 BaseBdev3 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.722 03:22:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.722 [ 00:15:27.722 { 00:15:27.722 "name": "BaseBdev3", 00:15:27.722 "aliases": [ 00:15:27.722 "5b97f244-bca8-49cf-ab56-49c27eb05964" 00:15:27.722 ], 00:15:27.722 "product_name": "Malloc disk", 00:15:27.722 "block_size": 512, 00:15:27.722 "num_blocks": 65536, 00:15:27.722 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:27.722 "assigned_rate_limits": { 00:15:27.722 "rw_ios_per_sec": 0, 00:15:27.722 "rw_mbytes_per_sec": 0, 00:15:27.722 "r_mbytes_per_sec": 0, 00:15:27.722 "w_mbytes_per_sec": 0 00:15:27.722 }, 00:15:27.722 "claimed": false, 00:15:27.722 "zoned": false, 00:15:27.722 "supported_io_types": { 00:15:27.722 "read": true, 00:15:27.722 "write": true, 00:15:27.722 "unmap": true, 00:15:27.722 "flush": true, 00:15:27.722 "reset": true, 00:15:27.722 "nvme_admin": false, 00:15:27.722 "nvme_io": false, 00:15:27.722 "nvme_io_md": false, 00:15:27.722 "write_zeroes": true, 00:15:27.722 "zcopy": true, 00:15:27.722 "get_zone_info": false, 00:15:27.722 "zone_management": false, 00:15:27.722 "zone_append": false, 00:15:27.722 "compare": false, 00:15:27.722 "compare_and_write": false, 00:15:27.722 "abort": true, 00:15:27.722 "seek_hole": false, 00:15:27.722 "seek_data": false, 00:15:27.722 "copy": true, 00:15:27.722 "nvme_iov_md": false 00:15:27.722 }, 00:15:27.722 "memory_domains": [ 00:15:27.722 { 00:15:27.722 
"dma_device_id": "system", 00:15:27.722 "dma_device_type": 1 00:15:27.722 }, 00:15:27.722 { 00:15:27.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.722 "dma_device_type": 2 00:15:27.722 } 00:15:27.722 ], 00:15:27.722 "driver_specific": {} 00:15:27.722 } 00:15:27.722 ] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.722 BaseBdev4 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.722 [ 00:15:27.722 { 00:15:27.722 "name": "BaseBdev4", 00:15:27.722 "aliases": [ 00:15:27.722 "56ade881-0fc1-426e-90c9-43e1bbe9e6cf" 00:15:27.722 ], 00:15:27.722 "product_name": "Malloc disk", 00:15:27.722 "block_size": 512, 00:15:27.722 "num_blocks": 65536, 00:15:27.722 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:27.722 "assigned_rate_limits": { 00:15:27.722 "rw_ios_per_sec": 0, 00:15:27.722 "rw_mbytes_per_sec": 0, 00:15:27.722 "r_mbytes_per_sec": 0, 00:15:27.722 "w_mbytes_per_sec": 0 00:15:27.722 }, 00:15:27.722 "claimed": false, 00:15:27.722 "zoned": false, 00:15:27.722 "supported_io_types": { 00:15:27.722 "read": true, 00:15:27.722 "write": true, 00:15:27.722 "unmap": true, 00:15:27.722 "flush": true, 00:15:27.722 "reset": true, 00:15:27.722 "nvme_admin": false, 00:15:27.722 "nvme_io": false, 00:15:27.722 "nvme_io_md": false, 00:15:27.722 "write_zeroes": true, 00:15:27.722 "zcopy": true, 00:15:27.722 "get_zone_info": false, 00:15:27.722 "zone_management": false, 00:15:27.722 "zone_append": false, 00:15:27.722 "compare": false, 00:15:27.722 "compare_and_write": false, 00:15:27.722 "abort": true, 00:15:27.722 "seek_hole": false, 00:15:27.722 "seek_data": false, 00:15:27.722 "copy": true, 00:15:27.722 "nvme_iov_md": false 00:15:27.722 }, 00:15:27.722 "memory_domains": [ 
00:15:27.722 { 00:15:27.722 "dma_device_id": "system", 00:15:27.722 "dma_device_type": 1 00:15:27.722 }, 00:15:27.722 { 00:15:27.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.722 "dma_device_type": 2 00:15:27.722 } 00:15:27.722 ], 00:15:27.722 "driver_specific": {} 00:15:27.722 } 00:15:27.722 ] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.722 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.722 [2024-11-20 03:22:17.303048] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.722 [2024-11-20 03:22:17.303152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.722 [2024-11-20 03:22:17.303193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.722 [2024-11-20 03:22:17.304990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.723 [2024-11-20 03:22:17.305094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.723 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.983 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.983 "name": "Existed_Raid", 00:15:27.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.983 "strip_size_kb": 64, 00:15:27.983 "state": "configuring", 00:15:27.983 "raid_level": "raid5f", 00:15:27.983 
"superblock": false, 00:15:27.983 "num_base_bdevs": 4, 00:15:27.983 "num_base_bdevs_discovered": 3, 00:15:27.983 "num_base_bdevs_operational": 4, 00:15:27.983 "base_bdevs_list": [ 00:15:27.983 { 00:15:27.983 "name": "BaseBdev1", 00:15:27.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.983 "is_configured": false, 00:15:27.983 "data_offset": 0, 00:15:27.983 "data_size": 0 00:15:27.983 }, 00:15:27.983 { 00:15:27.983 "name": "BaseBdev2", 00:15:27.983 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:27.983 "is_configured": true, 00:15:27.983 "data_offset": 0, 00:15:27.983 "data_size": 65536 00:15:27.983 }, 00:15:27.983 { 00:15:27.983 "name": "BaseBdev3", 00:15:27.983 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:27.983 "is_configured": true, 00:15:27.983 "data_offset": 0, 00:15:27.983 "data_size": 65536 00:15:27.983 }, 00:15:27.983 { 00:15:27.983 "name": "BaseBdev4", 00:15:27.983 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:27.983 "is_configured": true, 00:15:27.983 "data_offset": 0, 00:15:27.983 "data_size": 65536 00:15:27.983 } 00:15:27.983 ] 00:15:27.983 }' 00:15:27.983 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.983 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.242 [2024-11-20 03:22:17.758337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.242 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.243 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.243 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.243 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.243 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.243 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.243 "name": "Existed_Raid", 00:15:28.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.243 "strip_size_kb": 64, 00:15:28.243 "state": "configuring", 00:15:28.243 "raid_level": "raid5f", 00:15:28.243 "superblock": false, 
00:15:28.243 "num_base_bdevs": 4, 00:15:28.243 "num_base_bdevs_discovered": 2, 00:15:28.243 "num_base_bdevs_operational": 4, 00:15:28.243 "base_bdevs_list": [ 00:15:28.243 { 00:15:28.243 "name": "BaseBdev1", 00:15:28.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.243 "is_configured": false, 00:15:28.243 "data_offset": 0, 00:15:28.243 "data_size": 0 00:15:28.243 }, 00:15:28.243 { 00:15:28.243 "name": null, 00:15:28.243 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:28.243 "is_configured": false, 00:15:28.243 "data_offset": 0, 00:15:28.243 "data_size": 65536 00:15:28.243 }, 00:15:28.243 { 00:15:28.243 "name": "BaseBdev3", 00:15:28.243 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:28.243 "is_configured": true, 00:15:28.243 "data_offset": 0, 00:15:28.243 "data_size": 65536 00:15:28.243 }, 00:15:28.243 { 00:15:28.243 "name": "BaseBdev4", 00:15:28.243 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:28.243 "is_configured": true, 00:15:28.243 "data_offset": 0, 00:15:28.243 "data_size": 65536 00:15:28.243 } 00:15:28.243 ] 00:15:28.243 }' 00:15:28.243 03:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.243 03:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:28.811 
03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 [2024-11-20 03:22:18.265178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.811 BaseBdev1 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.811 
03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 [ 00:15:28.811 { 00:15:28.811 "name": "BaseBdev1", 00:15:28.811 "aliases": [ 00:15:28.811 "d1a276bb-5447-4a0f-9259-dc03510328a4" 00:15:28.811 ], 00:15:28.811 "product_name": "Malloc disk", 00:15:28.811 "block_size": 512, 00:15:28.811 "num_blocks": 65536, 00:15:28.811 "uuid": "d1a276bb-5447-4a0f-9259-dc03510328a4", 00:15:28.811 "assigned_rate_limits": { 00:15:28.811 "rw_ios_per_sec": 0, 00:15:28.811 "rw_mbytes_per_sec": 0, 00:15:28.811 "r_mbytes_per_sec": 0, 00:15:28.811 "w_mbytes_per_sec": 0 00:15:28.811 }, 00:15:28.811 "claimed": true, 00:15:28.811 "claim_type": "exclusive_write", 00:15:28.811 "zoned": false, 00:15:28.811 "supported_io_types": { 00:15:28.811 "read": true, 00:15:28.811 "write": true, 00:15:28.811 "unmap": true, 00:15:28.811 "flush": true, 00:15:28.811 "reset": true, 00:15:28.811 "nvme_admin": false, 00:15:28.811 "nvme_io": false, 00:15:28.811 "nvme_io_md": false, 00:15:28.811 "write_zeroes": true, 00:15:28.811 "zcopy": true, 00:15:28.811 "get_zone_info": false, 00:15:28.811 "zone_management": false, 00:15:28.811 "zone_append": false, 00:15:28.811 "compare": false, 00:15:28.811 "compare_and_write": false, 00:15:28.811 "abort": true, 00:15:28.811 "seek_hole": false, 00:15:28.811 "seek_data": false, 00:15:28.811 "copy": true, 00:15:28.811 "nvme_iov_md": false 00:15:28.811 }, 00:15:28.811 "memory_domains": [ 00:15:28.811 { 00:15:28.811 "dma_device_id": "system", 00:15:28.811 "dma_device_type": 1 00:15:28.811 }, 00:15:28.811 { 00:15:28.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.811 "dma_device_type": 2 00:15:28.811 } 00:15:28.811 ], 00:15:28.811 "driver_specific": {} 00:15:28.811 } 00:15:28.811 ] 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.811 03:22:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.811 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.811 "name": "Existed_Raid", 00:15:28.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.811 "strip_size_kb": 64, 00:15:28.811 "state": 
"configuring", 00:15:28.811 "raid_level": "raid5f", 00:15:28.811 "superblock": false, 00:15:28.811 "num_base_bdevs": 4, 00:15:28.811 "num_base_bdevs_discovered": 3, 00:15:28.811 "num_base_bdevs_operational": 4, 00:15:28.811 "base_bdevs_list": [ 00:15:28.812 { 00:15:28.812 "name": "BaseBdev1", 00:15:28.812 "uuid": "d1a276bb-5447-4a0f-9259-dc03510328a4", 00:15:28.812 "is_configured": true, 00:15:28.812 "data_offset": 0, 00:15:28.812 "data_size": 65536 00:15:28.812 }, 00:15:28.812 { 00:15:28.812 "name": null, 00:15:28.812 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:28.812 "is_configured": false, 00:15:28.812 "data_offset": 0, 00:15:28.812 "data_size": 65536 00:15:28.812 }, 00:15:28.812 { 00:15:28.812 "name": "BaseBdev3", 00:15:28.812 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:28.812 "is_configured": true, 00:15:28.812 "data_offset": 0, 00:15:28.812 "data_size": 65536 00:15:28.812 }, 00:15:28.812 { 00:15:28.812 "name": "BaseBdev4", 00:15:28.812 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:28.812 "is_configured": true, 00:15:28.812 "data_offset": 0, 00:15:28.812 "data_size": 65536 00:15:28.812 } 00:15:28.812 ] 00:15:28.812 }' 00:15:28.812 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.812 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.380 03:22:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.380 [2024-11-20 03:22:18.800331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.380 03:22:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.380 "name": "Existed_Raid", 00:15:29.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.380 "strip_size_kb": 64, 00:15:29.380 "state": "configuring", 00:15:29.380 "raid_level": "raid5f", 00:15:29.380 "superblock": false, 00:15:29.380 "num_base_bdevs": 4, 00:15:29.380 "num_base_bdevs_discovered": 2, 00:15:29.380 "num_base_bdevs_operational": 4, 00:15:29.380 "base_bdevs_list": [ 00:15:29.380 { 00:15:29.380 "name": "BaseBdev1", 00:15:29.380 "uuid": "d1a276bb-5447-4a0f-9259-dc03510328a4", 00:15:29.380 "is_configured": true, 00:15:29.380 "data_offset": 0, 00:15:29.380 "data_size": 65536 00:15:29.380 }, 00:15:29.380 { 00:15:29.380 "name": null, 00:15:29.380 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:29.380 "is_configured": false, 00:15:29.380 "data_offset": 0, 00:15:29.380 "data_size": 65536 00:15:29.380 }, 00:15:29.380 { 00:15:29.380 "name": null, 00:15:29.380 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:29.380 "is_configured": false, 00:15:29.380 "data_offset": 0, 00:15:29.380 "data_size": 65536 00:15:29.380 }, 00:15:29.380 { 00:15:29.380 "name": "BaseBdev4", 00:15:29.380 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:29.380 "is_configured": true, 00:15:29.380 "data_offset": 0, 00:15:29.380 "data_size": 65536 00:15:29.380 } 00:15:29.380 ] 00:15:29.380 }' 00:15:29.380 03:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.380 03:22:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.640 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.640 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.640 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.640 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.899 [2024-11-20 03:22:19.311495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.899 
03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.899 "name": "Existed_Raid", 00:15:29.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.899 "strip_size_kb": 64, 00:15:29.899 "state": "configuring", 00:15:29.899 "raid_level": "raid5f", 00:15:29.899 "superblock": false, 00:15:29.899 "num_base_bdevs": 4, 00:15:29.899 "num_base_bdevs_discovered": 3, 00:15:29.899 "num_base_bdevs_operational": 4, 00:15:29.899 "base_bdevs_list": [ 00:15:29.899 { 00:15:29.899 "name": "BaseBdev1", 00:15:29.899 "uuid": "d1a276bb-5447-4a0f-9259-dc03510328a4", 00:15:29.899 "is_configured": true, 00:15:29.899 "data_offset": 0, 00:15:29.899 "data_size": 65536 00:15:29.899 }, 00:15:29.899 { 00:15:29.899 "name": null, 00:15:29.899 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:29.899 "is_configured": 
false, 00:15:29.899 "data_offset": 0, 00:15:29.899 "data_size": 65536 00:15:29.899 }, 00:15:29.899 { 00:15:29.899 "name": "BaseBdev3", 00:15:29.899 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:29.899 "is_configured": true, 00:15:29.899 "data_offset": 0, 00:15:29.899 "data_size": 65536 00:15:29.899 }, 00:15:29.899 { 00:15:29.899 "name": "BaseBdev4", 00:15:29.899 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:29.899 "is_configured": true, 00:15:29.899 "data_offset": 0, 00:15:29.899 "data_size": 65536 00:15:29.899 } 00:15:29.899 ] 00:15:29.899 }' 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.899 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.158 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.158 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.158 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.158 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:30.158 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.158 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:30.158 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:30.158 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.158 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.158 [2024-11-20 03:22:19.774720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.418 "name": "Existed_Raid", 00:15:30.418 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:30.418 "strip_size_kb": 64, 00:15:30.418 "state": "configuring", 00:15:30.418 "raid_level": "raid5f", 00:15:30.418 "superblock": false, 00:15:30.418 "num_base_bdevs": 4, 00:15:30.418 "num_base_bdevs_discovered": 2, 00:15:30.418 "num_base_bdevs_operational": 4, 00:15:30.418 "base_bdevs_list": [ 00:15:30.418 { 00:15:30.418 "name": null, 00:15:30.418 "uuid": "d1a276bb-5447-4a0f-9259-dc03510328a4", 00:15:30.418 "is_configured": false, 00:15:30.418 "data_offset": 0, 00:15:30.418 "data_size": 65536 00:15:30.418 }, 00:15:30.418 { 00:15:30.418 "name": null, 00:15:30.418 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:30.418 "is_configured": false, 00:15:30.418 "data_offset": 0, 00:15:30.418 "data_size": 65536 00:15:30.418 }, 00:15:30.418 { 00:15:30.418 "name": "BaseBdev3", 00:15:30.418 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:30.418 "is_configured": true, 00:15:30.418 "data_offset": 0, 00:15:30.418 "data_size": 65536 00:15:30.418 }, 00:15:30.418 { 00:15:30.418 "name": "BaseBdev4", 00:15:30.418 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:30.418 "is_configured": true, 00:15:30.418 "data_offset": 0, 00:15:30.418 "data_size": 65536 00:15:30.418 } 00:15:30.418 ] 00:15:30.418 }' 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.418 03:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.987 [2024-11-20 03:22:20.393099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.987 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.988 "name": "Existed_Raid", 00:15:30.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.988 "strip_size_kb": 64, 00:15:30.988 "state": "configuring", 00:15:30.988 "raid_level": "raid5f", 00:15:30.988 "superblock": false, 00:15:30.988 "num_base_bdevs": 4, 00:15:30.988 "num_base_bdevs_discovered": 3, 00:15:30.988 "num_base_bdevs_operational": 4, 00:15:30.988 "base_bdevs_list": [ 00:15:30.988 { 00:15:30.988 "name": null, 00:15:30.988 "uuid": "d1a276bb-5447-4a0f-9259-dc03510328a4", 00:15:30.988 "is_configured": false, 00:15:30.988 "data_offset": 0, 00:15:30.988 "data_size": 65536 00:15:30.988 }, 00:15:30.988 { 00:15:30.988 "name": "BaseBdev2", 00:15:30.988 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:30.988 "is_configured": true, 00:15:30.988 "data_offset": 0, 00:15:30.988 "data_size": 65536 00:15:30.988 }, 00:15:30.988 { 00:15:30.988 "name": "BaseBdev3", 00:15:30.988 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:30.988 "is_configured": true, 00:15:30.988 "data_offset": 0, 00:15:30.988 "data_size": 65536 00:15:30.988 }, 00:15:30.988 { 00:15:30.988 "name": "BaseBdev4", 00:15:30.988 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:30.988 "is_configured": true, 00:15:30.988 "data_offset": 0, 00:15:30.988 "data_size": 65536 00:15:30.988 } 00:15:30.988 ] 00:15:30.988 }' 00:15:30.988 03:22:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.988 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.247 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:31.247 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.247 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.247 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.247 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.247 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d1a276bb-5447-4a0f-9259-dc03510328a4 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.508 [2024-11-20 03:22:20.964854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:31.508 [2024-11-20 
03:22:20.964907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:31.508 [2024-11-20 03:22:20.964914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:31.508 [2024-11-20 03:22:20.965158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:31.508 [2024-11-20 03:22:20.972631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:31.508 [2024-11-20 03:22:20.972697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:31.508 [2024-11-20 03:22:20.972982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.508 NewBaseBdev 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.508 03:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.508 [ 00:15:31.508 { 00:15:31.508 "name": "NewBaseBdev", 00:15:31.508 "aliases": [ 00:15:31.508 "d1a276bb-5447-4a0f-9259-dc03510328a4" 00:15:31.508 ], 00:15:31.508 "product_name": "Malloc disk", 00:15:31.508 "block_size": 512, 00:15:31.508 "num_blocks": 65536, 00:15:31.508 "uuid": "d1a276bb-5447-4a0f-9259-dc03510328a4", 00:15:31.508 "assigned_rate_limits": { 00:15:31.508 "rw_ios_per_sec": 0, 00:15:31.508 "rw_mbytes_per_sec": 0, 00:15:31.508 "r_mbytes_per_sec": 0, 00:15:31.508 "w_mbytes_per_sec": 0 00:15:31.508 }, 00:15:31.508 "claimed": true, 00:15:31.508 "claim_type": "exclusive_write", 00:15:31.508 "zoned": false, 00:15:31.508 "supported_io_types": { 00:15:31.508 "read": true, 00:15:31.508 "write": true, 00:15:31.508 "unmap": true, 00:15:31.508 "flush": true, 00:15:31.508 "reset": true, 00:15:31.508 "nvme_admin": false, 00:15:31.508 "nvme_io": false, 00:15:31.508 "nvme_io_md": false, 00:15:31.508 "write_zeroes": true, 00:15:31.508 "zcopy": true, 00:15:31.508 "get_zone_info": false, 00:15:31.508 "zone_management": false, 00:15:31.508 "zone_append": false, 00:15:31.508 "compare": false, 00:15:31.508 "compare_and_write": false, 00:15:31.508 "abort": true, 00:15:31.508 "seek_hole": false, 00:15:31.508 "seek_data": false, 00:15:31.508 "copy": true, 00:15:31.508 "nvme_iov_md": false 00:15:31.508 }, 00:15:31.508 "memory_domains": [ 00:15:31.508 { 00:15:31.508 "dma_device_id": "system", 00:15:31.508 "dma_device_type": 1 00:15:31.508 }, 00:15:31.508 { 00:15:31.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.508 "dma_device_type": 2 00:15:31.508 } 
00:15:31.508 ], 00:15:31.508 "driver_specific": {} 00:15:31.508 } 00:15:31.508 ] 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.508 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.508 "name": "Existed_Raid", 00:15:31.508 "uuid": "5e1cf84a-703d-45b4-9b94-e3c544bc433d", 00:15:31.508 "strip_size_kb": 64, 00:15:31.508 "state": "online", 00:15:31.508 "raid_level": "raid5f", 00:15:31.508 "superblock": false, 00:15:31.508 "num_base_bdevs": 4, 00:15:31.508 "num_base_bdevs_discovered": 4, 00:15:31.508 "num_base_bdevs_operational": 4, 00:15:31.508 "base_bdevs_list": [ 00:15:31.508 { 00:15:31.508 "name": "NewBaseBdev", 00:15:31.508 "uuid": "d1a276bb-5447-4a0f-9259-dc03510328a4", 00:15:31.508 "is_configured": true, 00:15:31.508 "data_offset": 0, 00:15:31.508 "data_size": 65536 00:15:31.508 }, 00:15:31.508 { 00:15:31.508 "name": "BaseBdev2", 00:15:31.508 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:31.508 "is_configured": true, 00:15:31.508 "data_offset": 0, 00:15:31.508 "data_size": 65536 00:15:31.508 }, 00:15:31.508 { 00:15:31.508 "name": "BaseBdev3", 00:15:31.508 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:31.508 "is_configured": true, 00:15:31.508 "data_offset": 0, 00:15:31.508 "data_size": 65536 00:15:31.508 }, 00:15:31.508 { 00:15:31.508 "name": "BaseBdev4", 00:15:31.508 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:31.508 "is_configured": true, 00:15:31.508 "data_offset": 0, 00:15:31.508 "data_size": 65536 00:15:31.508 } 00:15:31.509 ] 00:15:31.509 }' 00:15:31.509 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.509 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.078 [2024-11-20 03:22:21.488695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.078 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.078 "name": "Existed_Raid", 00:15:32.078 "aliases": [ 00:15:32.078 "5e1cf84a-703d-45b4-9b94-e3c544bc433d" 00:15:32.078 ], 00:15:32.078 "product_name": "Raid Volume", 00:15:32.078 "block_size": 512, 00:15:32.078 "num_blocks": 196608, 00:15:32.078 "uuid": "5e1cf84a-703d-45b4-9b94-e3c544bc433d", 00:15:32.078 "assigned_rate_limits": { 00:15:32.078 "rw_ios_per_sec": 0, 00:15:32.078 "rw_mbytes_per_sec": 0, 00:15:32.078 "r_mbytes_per_sec": 0, 00:15:32.078 "w_mbytes_per_sec": 0 00:15:32.078 }, 00:15:32.078 "claimed": false, 00:15:32.078 "zoned": false, 00:15:32.078 "supported_io_types": { 00:15:32.078 "read": true, 00:15:32.078 "write": true, 00:15:32.078 "unmap": false, 00:15:32.078 "flush": false, 00:15:32.078 "reset": true, 00:15:32.078 "nvme_admin": false, 00:15:32.078 "nvme_io": false, 00:15:32.078 "nvme_io_md": 
false, 00:15:32.078 "write_zeroes": true, 00:15:32.078 "zcopy": false, 00:15:32.078 "get_zone_info": false, 00:15:32.078 "zone_management": false, 00:15:32.078 "zone_append": false, 00:15:32.078 "compare": false, 00:15:32.078 "compare_and_write": false, 00:15:32.078 "abort": false, 00:15:32.078 "seek_hole": false, 00:15:32.078 "seek_data": false, 00:15:32.078 "copy": false, 00:15:32.078 "nvme_iov_md": false 00:15:32.078 }, 00:15:32.078 "driver_specific": { 00:15:32.078 "raid": { 00:15:32.078 "uuid": "5e1cf84a-703d-45b4-9b94-e3c544bc433d", 00:15:32.078 "strip_size_kb": 64, 00:15:32.078 "state": "online", 00:15:32.078 "raid_level": "raid5f", 00:15:32.078 "superblock": false, 00:15:32.078 "num_base_bdevs": 4, 00:15:32.078 "num_base_bdevs_discovered": 4, 00:15:32.078 "num_base_bdevs_operational": 4, 00:15:32.078 "base_bdevs_list": [ 00:15:32.078 { 00:15:32.078 "name": "NewBaseBdev", 00:15:32.078 "uuid": "d1a276bb-5447-4a0f-9259-dc03510328a4", 00:15:32.078 "is_configured": true, 00:15:32.078 "data_offset": 0, 00:15:32.078 "data_size": 65536 00:15:32.078 }, 00:15:32.078 { 00:15:32.078 "name": "BaseBdev2", 00:15:32.078 "uuid": "c90a459d-b23c-48a7-aeb5-d50c151e550b", 00:15:32.078 "is_configured": true, 00:15:32.079 "data_offset": 0, 00:15:32.079 "data_size": 65536 00:15:32.079 }, 00:15:32.079 { 00:15:32.079 "name": "BaseBdev3", 00:15:32.079 "uuid": "5b97f244-bca8-49cf-ab56-49c27eb05964", 00:15:32.079 "is_configured": true, 00:15:32.079 "data_offset": 0, 00:15:32.079 "data_size": 65536 00:15:32.079 }, 00:15:32.079 { 00:15:32.079 "name": "BaseBdev4", 00:15:32.079 "uuid": "56ade881-0fc1-426e-90c9-43e1bbe9e6cf", 00:15:32.079 "is_configured": true, 00:15:32.079 "data_offset": 0, 00:15:32.079 "data_size": 65536 00:15:32.079 } 00:15:32.079 ] 00:15:32.079 } 00:15:32.079 } 00:15:32.079 }' 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.079 03:22:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:32.079 BaseBdev2 00:15:32.079 BaseBdev3 00:15:32.079 BaseBdev4' 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:32.079 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.338 [2024-11-20 03:22:21.839799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.338 [2024-11-20 03:22:21.839871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.338 [2024-11-20 03:22:21.839964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.338 [2024-11-20 03:22:21.840277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.338 [2024-11-20 03:22:21.840341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82572 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82572 ']' 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82572 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.338 03:22:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82572 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82572' 00:15:32.338 killing process with pid 82572 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82572 00:15:32.338 [2024-11-20 03:22:21.886864] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.338 03:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82572 00:15:32.908 [2024-11-20 03:22:22.278228] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:33.846 00:15:33.846 real 0m11.516s 00:15:33.846 user 0m18.325s 00:15:33.846 sys 0m2.085s 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.846 ************************************ 00:15:33.846 END TEST raid5f_state_function_test 00:15:33.846 ************************************ 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.846 03:22:23 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:33.846 03:22:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:33.846 03:22:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.846 03:22:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.846 ************************************ 00:15:33.846 START TEST 
raid5f_state_function_test_sb 00:15:33.846 ************************************ 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:33.846 
03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:33.846 Process raid pid: 83242 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83242 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83242' 00:15:33.846 03:22:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83242 00:15:33.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83242 ']' 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.846 03:22:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.105 [2024-11-20 03:22:23.524991] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:15:34.105 [2024-11-20 03:22:23.525106] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.105 [2024-11-20 03:22:23.700682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.364 [2024-11-20 03:22:23.815148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.623 [2024-11-20 03:22:24.024467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.623 [2024-11-20 03:22:24.024521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.883 [2024-11-20 03:22:24.353765] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.883 [2024-11-20 03:22:24.353901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.883 [2024-11-20 03:22:24.353916] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.883 [2024-11-20 03:22:24.353926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.883 [2024-11-20 03:22:24.353933] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:34.883 [2024-11-20 03:22:24.353942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.883 [2024-11-20 03:22:24.353948] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.883 [2024-11-20 03:22:24.353956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.883 "name": "Existed_Raid", 00:15:34.883 "uuid": "a73ed789-8cd6-443d-a32c-382603e9c92b", 00:15:34.883 "strip_size_kb": 64, 00:15:34.883 "state": "configuring", 00:15:34.883 "raid_level": "raid5f", 00:15:34.883 "superblock": true, 00:15:34.883 "num_base_bdevs": 4, 00:15:34.883 "num_base_bdevs_discovered": 0, 00:15:34.883 "num_base_bdevs_operational": 4, 00:15:34.883 "base_bdevs_list": [ 00:15:34.883 { 00:15:34.883 "name": "BaseBdev1", 00:15:34.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.883 "is_configured": false, 00:15:34.883 "data_offset": 0, 00:15:34.883 "data_size": 0 00:15:34.883 }, 00:15:34.883 { 00:15:34.883 "name": "BaseBdev2", 00:15:34.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.883 "is_configured": false, 00:15:34.883 "data_offset": 0, 00:15:34.883 "data_size": 0 00:15:34.883 }, 00:15:34.883 { 00:15:34.883 "name": "BaseBdev3", 00:15:34.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.883 "is_configured": false, 00:15:34.883 "data_offset": 0, 00:15:34.883 "data_size": 0 00:15:34.883 }, 00:15:34.883 { 00:15:34.883 "name": "BaseBdev4", 00:15:34.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.883 "is_configured": false, 00:15:34.883 "data_offset": 0, 00:15:34.883 "data_size": 0 00:15:34.883 } 00:15:34.883 ] 00:15:34.883 }' 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.883 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.453 [2024-11-20 03:22:24.832853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.453 [2024-11-20 03:22:24.832893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.453 [2024-11-20 03:22:24.840836] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.453 [2024-11-20 03:22:24.840877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.453 [2024-11-20 03:22:24.840887] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.453 [2024-11-20 03:22:24.840912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.453 [2024-11-20 03:22:24.840918] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.453 [2024-11-20 03:22:24.840927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.453 [2024-11-20 03:22:24.840933] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:35.453 [2024-11-20 03:22:24.840942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.453 [2024-11-20 03:22:24.885059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.453 BaseBdev1 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.453 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.453 [ 00:15:35.453 { 00:15:35.453 "name": "BaseBdev1", 00:15:35.453 "aliases": [ 00:15:35.453 "57819f17-c656-4a88-bded-f94df5a264bb" 00:15:35.453 ], 00:15:35.453 "product_name": "Malloc disk", 00:15:35.453 "block_size": 512, 00:15:35.453 "num_blocks": 65536, 00:15:35.453 "uuid": "57819f17-c656-4a88-bded-f94df5a264bb", 00:15:35.453 "assigned_rate_limits": { 00:15:35.453 "rw_ios_per_sec": 0, 00:15:35.453 "rw_mbytes_per_sec": 0, 00:15:35.453 "r_mbytes_per_sec": 0, 00:15:35.453 "w_mbytes_per_sec": 0 00:15:35.453 }, 00:15:35.453 "claimed": true, 00:15:35.453 "claim_type": "exclusive_write", 00:15:35.453 "zoned": false, 00:15:35.453 "supported_io_types": { 00:15:35.453 "read": true, 00:15:35.453 "write": true, 00:15:35.453 "unmap": true, 00:15:35.453 "flush": true, 00:15:35.453 "reset": true, 00:15:35.453 "nvme_admin": false, 00:15:35.453 "nvme_io": false, 00:15:35.453 "nvme_io_md": false, 00:15:35.453 "write_zeroes": true, 00:15:35.453 "zcopy": true, 00:15:35.453 "get_zone_info": false, 00:15:35.453 "zone_management": false, 00:15:35.453 "zone_append": false, 00:15:35.453 "compare": false, 00:15:35.453 "compare_and_write": false, 00:15:35.453 "abort": true, 00:15:35.453 "seek_hole": false, 00:15:35.453 "seek_data": false, 00:15:35.453 "copy": true, 00:15:35.453 "nvme_iov_md": false 00:15:35.453 }, 00:15:35.453 "memory_domains": [ 00:15:35.453 { 00:15:35.453 "dma_device_id": "system", 00:15:35.453 "dma_device_type": 1 00:15:35.453 }, 00:15:35.453 { 00:15:35.453 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:35.453 "dma_device_type": 2 00:15:35.453 } 00:15:35.453 ], 00:15:35.453 "driver_specific": {} 00:15:35.453 } 00:15:35.454 ] 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.454 03:22:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.454 "name": "Existed_Raid", 00:15:35.454 "uuid": "43c14b4d-6442-43d4-b707-b53d15f2e8e0", 00:15:35.454 "strip_size_kb": 64, 00:15:35.454 "state": "configuring", 00:15:35.454 "raid_level": "raid5f", 00:15:35.454 "superblock": true, 00:15:35.454 "num_base_bdevs": 4, 00:15:35.454 "num_base_bdevs_discovered": 1, 00:15:35.454 "num_base_bdevs_operational": 4, 00:15:35.454 "base_bdevs_list": [ 00:15:35.454 { 00:15:35.454 "name": "BaseBdev1", 00:15:35.454 "uuid": "57819f17-c656-4a88-bded-f94df5a264bb", 00:15:35.454 "is_configured": true, 00:15:35.454 "data_offset": 2048, 00:15:35.454 "data_size": 63488 00:15:35.454 }, 00:15:35.454 { 00:15:35.454 "name": "BaseBdev2", 00:15:35.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.454 "is_configured": false, 00:15:35.454 "data_offset": 0, 00:15:35.454 "data_size": 0 00:15:35.454 }, 00:15:35.454 { 00:15:35.454 "name": "BaseBdev3", 00:15:35.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.454 "is_configured": false, 00:15:35.454 "data_offset": 0, 00:15:35.454 "data_size": 0 00:15:35.454 }, 00:15:35.454 { 00:15:35.454 "name": "BaseBdev4", 00:15:35.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.454 "is_configured": false, 00:15:35.454 "data_offset": 0, 00:15:35.454 "data_size": 0 00:15:35.454 } 00:15:35.454 ] 00:15:35.454 }' 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.454 03:22:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.714 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:35.714 03:22:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.714 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.714 [2024-11-20 03:22:25.324381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.714 [2024-11-20 03:22:25.324501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:35.714 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.715 [2024-11-20 03:22:25.336410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.715 [2024-11-20 03:22:25.338301] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.715 [2024-11-20 03:22:25.338384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.715 [2024-11-20 03:22:25.338398] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.715 [2024-11-20 03:22:25.338416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.715 [2024-11-20 03:22:25.338423] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:35.715 [2024-11-20 03:22:25.338431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.715 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.975 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.975 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.975 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.975 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.975 03:22:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.975 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.975 "name": "Existed_Raid", 00:15:35.975 "uuid": "5c925ac5-d5b0-418e-8c1b-fa5e0f6303b5", 00:15:35.975 "strip_size_kb": 64, 00:15:35.975 "state": "configuring", 00:15:35.975 "raid_level": "raid5f", 00:15:35.975 "superblock": true, 00:15:35.975 "num_base_bdevs": 4, 00:15:35.975 "num_base_bdevs_discovered": 1, 00:15:35.975 "num_base_bdevs_operational": 4, 00:15:35.975 "base_bdevs_list": [ 00:15:35.975 { 00:15:35.975 "name": "BaseBdev1", 00:15:35.975 "uuid": "57819f17-c656-4a88-bded-f94df5a264bb", 00:15:35.975 "is_configured": true, 00:15:35.975 "data_offset": 2048, 00:15:35.975 "data_size": 63488 00:15:35.975 }, 00:15:35.975 { 00:15:35.975 "name": "BaseBdev2", 00:15:35.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.975 "is_configured": false, 00:15:35.975 "data_offset": 0, 00:15:35.975 "data_size": 0 00:15:35.975 }, 00:15:35.975 { 00:15:35.975 "name": "BaseBdev3", 00:15:35.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.975 "is_configured": false, 00:15:35.975 "data_offset": 0, 00:15:35.975 "data_size": 0 00:15:35.975 }, 00:15:35.975 { 00:15:35.975 "name": "BaseBdev4", 00:15:35.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.975 "is_configured": false, 00:15:35.975 "data_offset": 0, 00:15:35.975 "data_size": 0 00:15:35.975 } 00:15:35.975 ] 00:15:35.975 }' 00:15:35.975 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.975 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.234 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:36.234 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:36.234 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.234 [2024-11-20 03:22:25.859657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.234 BaseBdev2 00:15:36.234 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.234 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:36.234 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:36.235 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.235 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:36.235 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.235 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.235 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.235 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.235 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.494 [ 00:15:36.494 { 00:15:36.494 "name": "BaseBdev2", 00:15:36.494 "aliases": [ 00:15:36.494 
"6a5b00bf-a32f-4b31-acd5-c7d8a25891f8" 00:15:36.494 ], 00:15:36.494 "product_name": "Malloc disk", 00:15:36.494 "block_size": 512, 00:15:36.494 "num_blocks": 65536, 00:15:36.494 "uuid": "6a5b00bf-a32f-4b31-acd5-c7d8a25891f8", 00:15:36.494 "assigned_rate_limits": { 00:15:36.494 "rw_ios_per_sec": 0, 00:15:36.494 "rw_mbytes_per_sec": 0, 00:15:36.494 "r_mbytes_per_sec": 0, 00:15:36.494 "w_mbytes_per_sec": 0 00:15:36.494 }, 00:15:36.494 "claimed": true, 00:15:36.494 "claim_type": "exclusive_write", 00:15:36.494 "zoned": false, 00:15:36.494 "supported_io_types": { 00:15:36.494 "read": true, 00:15:36.494 "write": true, 00:15:36.494 "unmap": true, 00:15:36.494 "flush": true, 00:15:36.494 "reset": true, 00:15:36.494 "nvme_admin": false, 00:15:36.494 "nvme_io": false, 00:15:36.494 "nvme_io_md": false, 00:15:36.494 "write_zeroes": true, 00:15:36.494 "zcopy": true, 00:15:36.494 "get_zone_info": false, 00:15:36.494 "zone_management": false, 00:15:36.494 "zone_append": false, 00:15:36.494 "compare": false, 00:15:36.494 "compare_and_write": false, 00:15:36.494 "abort": true, 00:15:36.494 "seek_hole": false, 00:15:36.494 "seek_data": false, 00:15:36.494 "copy": true, 00:15:36.494 "nvme_iov_md": false 00:15:36.494 }, 00:15:36.494 "memory_domains": [ 00:15:36.494 { 00:15:36.494 "dma_device_id": "system", 00:15:36.494 "dma_device_type": 1 00:15:36.494 }, 00:15:36.494 { 00:15:36.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.494 "dma_device_type": 2 00:15:36.494 } 00:15:36.494 ], 00:15:36.494 "driver_specific": {} 00:15:36.494 } 00:15:36.494 ] 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.494 "name": "Existed_Raid", 00:15:36.494 "uuid": 
"5c925ac5-d5b0-418e-8c1b-fa5e0f6303b5", 00:15:36.494 "strip_size_kb": 64, 00:15:36.494 "state": "configuring", 00:15:36.494 "raid_level": "raid5f", 00:15:36.494 "superblock": true, 00:15:36.494 "num_base_bdevs": 4, 00:15:36.494 "num_base_bdevs_discovered": 2, 00:15:36.494 "num_base_bdevs_operational": 4, 00:15:36.494 "base_bdevs_list": [ 00:15:36.494 { 00:15:36.494 "name": "BaseBdev1", 00:15:36.494 "uuid": "57819f17-c656-4a88-bded-f94df5a264bb", 00:15:36.494 "is_configured": true, 00:15:36.494 "data_offset": 2048, 00:15:36.494 "data_size": 63488 00:15:36.494 }, 00:15:36.494 { 00:15:36.494 "name": "BaseBdev2", 00:15:36.494 "uuid": "6a5b00bf-a32f-4b31-acd5-c7d8a25891f8", 00:15:36.494 "is_configured": true, 00:15:36.494 "data_offset": 2048, 00:15:36.494 "data_size": 63488 00:15:36.494 }, 00:15:36.494 { 00:15:36.494 "name": "BaseBdev3", 00:15:36.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.494 "is_configured": false, 00:15:36.494 "data_offset": 0, 00:15:36.494 "data_size": 0 00:15:36.494 }, 00:15:36.494 { 00:15:36.494 "name": "BaseBdev4", 00:15:36.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.494 "is_configured": false, 00:15:36.494 "data_offset": 0, 00:15:36.494 "data_size": 0 00:15:36.494 } 00:15:36.494 ] 00:15:36.494 }' 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.494 03:22:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.754 [2024-11-20 03:22:26.327357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.754 BaseBdev3 
00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.754 [ 00:15:36.754 { 00:15:36.754 "name": "BaseBdev3", 00:15:36.754 "aliases": [ 00:15:36.754 "c60eb836-2cc9-4b89-b04d-2b07a7a476cc" 00:15:36.754 ], 00:15:36.754 "product_name": "Malloc disk", 00:15:36.754 "block_size": 512, 00:15:36.754 "num_blocks": 65536, 00:15:36.754 "uuid": "c60eb836-2cc9-4b89-b04d-2b07a7a476cc", 00:15:36.754 
"assigned_rate_limits": { 00:15:36.754 "rw_ios_per_sec": 0, 00:15:36.754 "rw_mbytes_per_sec": 0, 00:15:36.754 "r_mbytes_per_sec": 0, 00:15:36.754 "w_mbytes_per_sec": 0 00:15:36.754 }, 00:15:36.754 "claimed": true, 00:15:36.754 "claim_type": "exclusive_write", 00:15:36.754 "zoned": false, 00:15:36.754 "supported_io_types": { 00:15:36.754 "read": true, 00:15:36.754 "write": true, 00:15:36.754 "unmap": true, 00:15:36.754 "flush": true, 00:15:36.754 "reset": true, 00:15:36.754 "nvme_admin": false, 00:15:36.754 "nvme_io": false, 00:15:36.754 "nvme_io_md": false, 00:15:36.754 "write_zeroes": true, 00:15:36.754 "zcopy": true, 00:15:36.754 "get_zone_info": false, 00:15:36.754 "zone_management": false, 00:15:36.754 "zone_append": false, 00:15:36.754 "compare": false, 00:15:36.754 "compare_and_write": false, 00:15:36.754 "abort": true, 00:15:36.754 "seek_hole": false, 00:15:36.754 "seek_data": false, 00:15:36.754 "copy": true, 00:15:36.754 "nvme_iov_md": false 00:15:36.754 }, 00:15:36.754 "memory_domains": [ 00:15:36.754 { 00:15:36.754 "dma_device_id": "system", 00:15:36.754 "dma_device_type": 1 00:15:36.754 }, 00:15:36.754 { 00:15:36.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.754 "dma_device_type": 2 00:15:36.754 } 00:15:36.754 ], 00:15:36.754 "driver_specific": {} 00:15:36.754 } 00:15:36.754 ] 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.754 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.755 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.755 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.755 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.755 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.755 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.755 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.755 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.014 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.014 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.014 "name": "Existed_Raid", 00:15:37.014 "uuid": "5c925ac5-d5b0-418e-8c1b-fa5e0f6303b5", 00:15:37.014 "strip_size_kb": 64, 00:15:37.014 "state": "configuring", 00:15:37.014 "raid_level": "raid5f", 00:15:37.014 "superblock": true, 00:15:37.014 "num_base_bdevs": 4, 00:15:37.014 "num_base_bdevs_discovered": 3, 
00:15:37.014 "num_base_bdevs_operational": 4, 00:15:37.014 "base_bdevs_list": [ 00:15:37.014 { 00:15:37.015 "name": "BaseBdev1", 00:15:37.015 "uuid": "57819f17-c656-4a88-bded-f94df5a264bb", 00:15:37.015 "is_configured": true, 00:15:37.015 "data_offset": 2048, 00:15:37.015 "data_size": 63488 00:15:37.015 }, 00:15:37.015 { 00:15:37.015 "name": "BaseBdev2", 00:15:37.015 "uuid": "6a5b00bf-a32f-4b31-acd5-c7d8a25891f8", 00:15:37.015 "is_configured": true, 00:15:37.015 "data_offset": 2048, 00:15:37.015 "data_size": 63488 00:15:37.015 }, 00:15:37.015 { 00:15:37.015 "name": "BaseBdev3", 00:15:37.015 "uuid": "c60eb836-2cc9-4b89-b04d-2b07a7a476cc", 00:15:37.015 "is_configured": true, 00:15:37.015 "data_offset": 2048, 00:15:37.015 "data_size": 63488 00:15:37.015 }, 00:15:37.015 { 00:15:37.015 "name": "BaseBdev4", 00:15:37.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.015 "is_configured": false, 00:15:37.015 "data_offset": 0, 00:15:37.015 "data_size": 0 00:15:37.015 } 00:15:37.015 ] 00:15:37.015 }' 00:15:37.015 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.015 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.275 [2024-11-20 03:22:26.864104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:37.275 [2024-11-20 03:22:26.864450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:37.275 [2024-11-20 03:22:26.864502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:37.275 [2024-11-20 
03:22:26.864785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:37.275 BaseBdev4 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.275 [2024-11-20 03:22:26.872429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:37.275 [2024-11-20 03:22:26.872489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:37.275 [2024-11-20 03:22:26.872776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:37.275 03:22:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.275 [ 00:15:37.275 { 00:15:37.275 "name": "BaseBdev4", 00:15:37.275 "aliases": [ 00:15:37.275 "4950d7fa-83db-4e7a-b5ac-672c2562a7a6" 00:15:37.275 ], 00:15:37.275 "product_name": "Malloc disk", 00:15:37.275 "block_size": 512, 00:15:37.275 "num_blocks": 65536, 00:15:37.275 "uuid": "4950d7fa-83db-4e7a-b5ac-672c2562a7a6", 00:15:37.275 "assigned_rate_limits": { 00:15:37.275 "rw_ios_per_sec": 0, 00:15:37.275 "rw_mbytes_per_sec": 0, 00:15:37.275 "r_mbytes_per_sec": 0, 00:15:37.275 "w_mbytes_per_sec": 0 00:15:37.275 }, 00:15:37.275 "claimed": true, 00:15:37.275 "claim_type": "exclusive_write", 00:15:37.275 "zoned": false, 00:15:37.275 "supported_io_types": { 00:15:37.275 "read": true, 00:15:37.275 "write": true, 00:15:37.275 "unmap": true, 00:15:37.275 "flush": true, 00:15:37.275 "reset": true, 00:15:37.275 "nvme_admin": false, 00:15:37.275 "nvme_io": false, 00:15:37.275 "nvme_io_md": false, 00:15:37.275 "write_zeroes": true, 00:15:37.275 "zcopy": true, 00:15:37.275 "get_zone_info": false, 00:15:37.275 "zone_management": false, 00:15:37.275 "zone_append": false, 00:15:37.275 "compare": false, 00:15:37.275 "compare_and_write": false, 00:15:37.275 "abort": true, 00:15:37.275 "seek_hole": false, 00:15:37.275 "seek_data": false, 00:15:37.275 "copy": true, 00:15:37.275 "nvme_iov_md": false 00:15:37.275 }, 00:15:37.275 "memory_domains": [ 00:15:37.275 { 00:15:37.275 "dma_device_id": "system", 00:15:37.275 "dma_device_type": 1 00:15:37.275 }, 00:15:37.275 { 00:15:37.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.275 "dma_device_type": 2 00:15:37.275 } 00:15:37.275 ], 00:15:37.275 "driver_specific": {} 00:15:37.275 } 00:15:37.275 ] 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.275 03:22:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.275 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.535 "name": "Existed_Raid", 00:15:37.535 "uuid": "5c925ac5-d5b0-418e-8c1b-fa5e0f6303b5", 00:15:37.535 "strip_size_kb": 64, 00:15:37.535 "state": "online", 00:15:37.535 "raid_level": "raid5f", 00:15:37.535 "superblock": true, 00:15:37.535 "num_base_bdevs": 4, 00:15:37.535 "num_base_bdevs_discovered": 4, 00:15:37.535 "num_base_bdevs_operational": 4, 00:15:37.535 "base_bdevs_list": [ 00:15:37.535 { 00:15:37.535 "name": "BaseBdev1", 00:15:37.535 "uuid": "57819f17-c656-4a88-bded-f94df5a264bb", 00:15:37.535 "is_configured": true, 00:15:37.535 "data_offset": 2048, 00:15:37.535 "data_size": 63488 00:15:37.535 }, 00:15:37.535 { 00:15:37.535 "name": "BaseBdev2", 00:15:37.535 "uuid": "6a5b00bf-a32f-4b31-acd5-c7d8a25891f8", 00:15:37.535 "is_configured": true, 00:15:37.535 "data_offset": 2048, 00:15:37.535 "data_size": 63488 00:15:37.535 }, 00:15:37.535 { 00:15:37.535 "name": "BaseBdev3", 00:15:37.535 "uuid": "c60eb836-2cc9-4b89-b04d-2b07a7a476cc", 00:15:37.535 "is_configured": true, 00:15:37.535 "data_offset": 2048, 00:15:37.535 "data_size": 63488 00:15:37.535 }, 00:15:37.535 { 00:15:37.535 "name": "BaseBdev4", 00:15:37.535 "uuid": "4950d7fa-83db-4e7a-b5ac-672c2562a7a6", 00:15:37.535 "is_configured": true, 00:15:37.535 "data_offset": 2048, 00:15:37.535 "data_size": 63488 00:15:37.535 } 00:15:37.535 ] 00:15:37.535 }' 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.535 03:22:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.795 [2024-11-20 03:22:27.308488] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.795 "name": "Existed_Raid", 00:15:37.795 "aliases": [ 00:15:37.795 "5c925ac5-d5b0-418e-8c1b-fa5e0f6303b5" 00:15:37.795 ], 00:15:37.795 "product_name": "Raid Volume", 00:15:37.795 "block_size": 512, 00:15:37.795 "num_blocks": 190464, 00:15:37.795 "uuid": "5c925ac5-d5b0-418e-8c1b-fa5e0f6303b5", 00:15:37.795 "assigned_rate_limits": { 00:15:37.795 "rw_ios_per_sec": 0, 00:15:37.795 "rw_mbytes_per_sec": 0, 00:15:37.795 "r_mbytes_per_sec": 0, 00:15:37.795 "w_mbytes_per_sec": 0 00:15:37.795 }, 00:15:37.795 "claimed": false, 00:15:37.795 "zoned": false, 00:15:37.795 "supported_io_types": { 00:15:37.795 "read": true, 00:15:37.795 "write": true, 00:15:37.795 "unmap": false, 00:15:37.795 "flush": false, 
00:15:37.795 "reset": true, 00:15:37.795 "nvme_admin": false, 00:15:37.795 "nvme_io": false, 00:15:37.795 "nvme_io_md": false, 00:15:37.795 "write_zeroes": true, 00:15:37.795 "zcopy": false, 00:15:37.795 "get_zone_info": false, 00:15:37.795 "zone_management": false, 00:15:37.795 "zone_append": false, 00:15:37.795 "compare": false, 00:15:37.795 "compare_and_write": false, 00:15:37.795 "abort": false, 00:15:37.795 "seek_hole": false, 00:15:37.795 "seek_data": false, 00:15:37.795 "copy": false, 00:15:37.795 "nvme_iov_md": false 00:15:37.795 }, 00:15:37.795 "driver_specific": { 00:15:37.795 "raid": { 00:15:37.795 "uuid": "5c925ac5-d5b0-418e-8c1b-fa5e0f6303b5", 00:15:37.795 "strip_size_kb": 64, 00:15:37.795 "state": "online", 00:15:37.795 "raid_level": "raid5f", 00:15:37.795 "superblock": true, 00:15:37.795 "num_base_bdevs": 4, 00:15:37.795 "num_base_bdevs_discovered": 4, 00:15:37.795 "num_base_bdevs_operational": 4, 00:15:37.795 "base_bdevs_list": [ 00:15:37.795 { 00:15:37.795 "name": "BaseBdev1", 00:15:37.795 "uuid": "57819f17-c656-4a88-bded-f94df5a264bb", 00:15:37.795 "is_configured": true, 00:15:37.795 "data_offset": 2048, 00:15:37.795 "data_size": 63488 00:15:37.795 }, 00:15:37.795 { 00:15:37.795 "name": "BaseBdev2", 00:15:37.795 "uuid": "6a5b00bf-a32f-4b31-acd5-c7d8a25891f8", 00:15:37.795 "is_configured": true, 00:15:37.795 "data_offset": 2048, 00:15:37.795 "data_size": 63488 00:15:37.795 }, 00:15:37.795 { 00:15:37.795 "name": "BaseBdev3", 00:15:37.795 "uuid": "c60eb836-2cc9-4b89-b04d-2b07a7a476cc", 00:15:37.795 "is_configured": true, 00:15:37.795 "data_offset": 2048, 00:15:37.795 "data_size": 63488 00:15:37.795 }, 00:15:37.795 { 00:15:37.795 "name": "BaseBdev4", 00:15:37.795 "uuid": "4950d7fa-83db-4e7a-b5ac-672c2562a7a6", 00:15:37.795 "is_configured": true, 00:15:37.795 "data_offset": 2048, 00:15:37.795 "data_size": 63488 00:15:37.795 } 00:15:37.795 ] 00:15:37.795 } 00:15:37.795 } 00:15:37.795 }' 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:37.795 BaseBdev2 00:15:37.795 BaseBdev3 00:15:37.795 BaseBdev4' 00:15:37.795 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.055 03:22:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.055 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.055 [2024-11-20 03:22:27.639755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.314 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.314 "name": "Existed_Raid", 00:15:38.314 "uuid": "5c925ac5-d5b0-418e-8c1b-fa5e0f6303b5", 00:15:38.314 "strip_size_kb": 64, 00:15:38.314 "state": "online", 00:15:38.314 "raid_level": "raid5f", 00:15:38.314 "superblock": true, 00:15:38.314 "num_base_bdevs": 4, 00:15:38.314 "num_base_bdevs_discovered": 3, 00:15:38.314 "num_base_bdevs_operational": 3, 00:15:38.315 "base_bdevs_list": [ 00:15:38.315 { 00:15:38.315 "name": 
null, 00:15:38.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.315 "is_configured": false, 00:15:38.315 "data_offset": 0, 00:15:38.315 "data_size": 63488 00:15:38.315 }, 00:15:38.315 { 00:15:38.315 "name": "BaseBdev2", 00:15:38.315 "uuid": "6a5b00bf-a32f-4b31-acd5-c7d8a25891f8", 00:15:38.315 "is_configured": true, 00:15:38.315 "data_offset": 2048, 00:15:38.315 "data_size": 63488 00:15:38.315 }, 00:15:38.315 { 00:15:38.315 "name": "BaseBdev3", 00:15:38.315 "uuid": "c60eb836-2cc9-4b89-b04d-2b07a7a476cc", 00:15:38.315 "is_configured": true, 00:15:38.315 "data_offset": 2048, 00:15:38.315 "data_size": 63488 00:15:38.315 }, 00:15:38.315 { 00:15:38.315 "name": "BaseBdev4", 00:15:38.315 "uuid": "4950d7fa-83db-4e7a-b5ac-672c2562a7a6", 00:15:38.315 "is_configured": true, 00:15:38.315 "data_offset": 2048, 00:15:38.315 "data_size": 63488 00:15:38.315 } 00:15:38.315 ] 00:15:38.315 }' 00:15:38.315 03:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.315 03:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.574 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:38.574 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.574 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.574 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.574 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.574 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.574 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.834 [2024-11-20 03:22:28.225228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.834 [2024-11-20 03:22:28.225385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.834 [2024-11-20 03:22:28.319761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.834 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.834 [2024-11-20 03:22:28.375731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.094 [2024-11-20 
03:22:28.530266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:39.094 [2024-11-20 03:22:28.530364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.094 03:22:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.094 BaseBdev2 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.094 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.354 [ 00:15:39.354 { 00:15:39.354 "name": "BaseBdev2", 00:15:39.354 "aliases": [ 00:15:39.354 "3604d568-e928-4803-b84e-40e07bebaa69" 00:15:39.354 ], 00:15:39.354 "product_name": "Malloc disk", 00:15:39.354 "block_size": 512, 00:15:39.354 
"num_blocks": 65536, 00:15:39.354 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:39.354 "assigned_rate_limits": { 00:15:39.354 "rw_ios_per_sec": 0, 00:15:39.354 "rw_mbytes_per_sec": 0, 00:15:39.354 "r_mbytes_per_sec": 0, 00:15:39.354 "w_mbytes_per_sec": 0 00:15:39.354 }, 00:15:39.354 "claimed": false, 00:15:39.354 "zoned": false, 00:15:39.354 "supported_io_types": { 00:15:39.354 "read": true, 00:15:39.354 "write": true, 00:15:39.354 "unmap": true, 00:15:39.355 "flush": true, 00:15:39.355 "reset": true, 00:15:39.355 "nvme_admin": false, 00:15:39.355 "nvme_io": false, 00:15:39.355 "nvme_io_md": false, 00:15:39.355 "write_zeroes": true, 00:15:39.355 "zcopy": true, 00:15:39.355 "get_zone_info": false, 00:15:39.355 "zone_management": false, 00:15:39.355 "zone_append": false, 00:15:39.355 "compare": false, 00:15:39.355 "compare_and_write": false, 00:15:39.355 "abort": true, 00:15:39.355 "seek_hole": false, 00:15:39.355 "seek_data": false, 00:15:39.355 "copy": true, 00:15:39.355 "nvme_iov_md": false 00:15:39.355 }, 00:15:39.355 "memory_domains": [ 00:15:39.355 { 00:15:39.355 "dma_device_id": "system", 00:15:39.355 "dma_device_type": 1 00:15:39.355 }, 00:15:39.355 { 00:15:39.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.355 "dma_device_type": 2 00:15:39.355 } 00:15:39.355 ], 00:15:39.355 "driver_specific": {} 00:15:39.355 } 00:15:39.355 ] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:39.355 03:22:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.355 BaseBdev3 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.355 [ 00:15:39.355 { 00:15:39.355 "name": "BaseBdev3", 00:15:39.355 "aliases": [ 00:15:39.355 
"fba12300-f2b9-46a0-8e2f-409e465be783" 00:15:39.355 ], 00:15:39.355 "product_name": "Malloc disk", 00:15:39.355 "block_size": 512, 00:15:39.355 "num_blocks": 65536, 00:15:39.355 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 00:15:39.355 "assigned_rate_limits": { 00:15:39.355 "rw_ios_per_sec": 0, 00:15:39.355 "rw_mbytes_per_sec": 0, 00:15:39.355 "r_mbytes_per_sec": 0, 00:15:39.355 "w_mbytes_per_sec": 0 00:15:39.355 }, 00:15:39.355 "claimed": false, 00:15:39.355 "zoned": false, 00:15:39.355 "supported_io_types": { 00:15:39.355 "read": true, 00:15:39.355 "write": true, 00:15:39.355 "unmap": true, 00:15:39.355 "flush": true, 00:15:39.355 "reset": true, 00:15:39.355 "nvme_admin": false, 00:15:39.355 "nvme_io": false, 00:15:39.355 "nvme_io_md": false, 00:15:39.355 "write_zeroes": true, 00:15:39.355 "zcopy": true, 00:15:39.355 "get_zone_info": false, 00:15:39.355 "zone_management": false, 00:15:39.355 "zone_append": false, 00:15:39.355 "compare": false, 00:15:39.355 "compare_and_write": false, 00:15:39.355 "abort": true, 00:15:39.355 "seek_hole": false, 00:15:39.355 "seek_data": false, 00:15:39.355 "copy": true, 00:15:39.355 "nvme_iov_md": false 00:15:39.355 }, 00:15:39.355 "memory_domains": [ 00:15:39.355 { 00:15:39.355 "dma_device_id": "system", 00:15:39.355 "dma_device_type": 1 00:15:39.355 }, 00:15:39.355 { 00:15:39.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.355 "dma_device_type": 2 00:15:39.355 } 00:15:39.355 ], 00:15:39.355 "driver_specific": {} 00:15:39.355 } 00:15:39.355 ] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.355 03:22:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.355 BaseBdev4 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:39.355 [ 00:15:39.355 { 00:15:39.355 "name": "BaseBdev4", 00:15:39.355 "aliases": [ 00:15:39.355 "e3360f82-b4d2-4dfb-84af-cb605b0def41" 00:15:39.355 ], 00:15:39.355 "product_name": "Malloc disk", 00:15:39.355 "block_size": 512, 00:15:39.355 "num_blocks": 65536, 00:15:39.355 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:39.355 "assigned_rate_limits": { 00:15:39.355 "rw_ios_per_sec": 0, 00:15:39.355 "rw_mbytes_per_sec": 0, 00:15:39.355 "r_mbytes_per_sec": 0, 00:15:39.355 "w_mbytes_per_sec": 0 00:15:39.355 }, 00:15:39.355 "claimed": false, 00:15:39.355 "zoned": false, 00:15:39.355 "supported_io_types": { 00:15:39.355 "read": true, 00:15:39.355 "write": true, 00:15:39.355 "unmap": true, 00:15:39.355 "flush": true, 00:15:39.355 "reset": true, 00:15:39.355 "nvme_admin": false, 00:15:39.355 "nvme_io": false, 00:15:39.355 "nvme_io_md": false, 00:15:39.355 "write_zeroes": true, 00:15:39.355 "zcopy": true, 00:15:39.355 "get_zone_info": false, 00:15:39.355 "zone_management": false, 00:15:39.355 "zone_append": false, 00:15:39.355 "compare": false, 00:15:39.355 "compare_and_write": false, 00:15:39.355 "abort": true, 00:15:39.355 "seek_hole": false, 00:15:39.355 "seek_data": false, 00:15:39.355 "copy": true, 00:15:39.355 "nvme_iov_md": false 00:15:39.355 }, 00:15:39.355 "memory_domains": [ 00:15:39.355 { 00:15:39.355 "dma_device_id": "system", 00:15:39.355 "dma_device_type": 1 00:15:39.355 }, 00:15:39.355 { 00:15:39.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.355 "dma_device_type": 2 00:15:39.355 } 00:15:39.355 ], 00:15:39.355 "driver_specific": {} 00:15:39.355 } 00:15:39.355 ] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:39.355 03:22:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.355 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.355 [2024-11-20 03:22:28.905837] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.355 [2024-11-20 03:22:28.905919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.355 [2024-11-20 03:22:28.905945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.355 [2024-11-20 03:22:28.907918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.355 [2024-11-20 03:22:28.907986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.356 "name": "Existed_Raid", 00:15:39.356 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:39.356 "strip_size_kb": 64, 00:15:39.356 "state": "configuring", 00:15:39.356 "raid_level": "raid5f", 00:15:39.356 "superblock": true, 00:15:39.356 "num_base_bdevs": 4, 00:15:39.356 "num_base_bdevs_discovered": 3, 00:15:39.356 "num_base_bdevs_operational": 4, 00:15:39.356 "base_bdevs_list": [ 00:15:39.356 { 00:15:39.356 "name": "BaseBdev1", 00:15:39.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.356 "is_configured": false, 00:15:39.356 "data_offset": 0, 00:15:39.356 "data_size": 0 00:15:39.356 }, 00:15:39.356 { 00:15:39.356 "name": "BaseBdev2", 00:15:39.356 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:39.356 "is_configured": true, 00:15:39.356 "data_offset": 2048, 00:15:39.356 
"data_size": 63488 00:15:39.356 }, 00:15:39.356 { 00:15:39.356 "name": "BaseBdev3", 00:15:39.356 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 00:15:39.356 "is_configured": true, 00:15:39.356 "data_offset": 2048, 00:15:39.356 "data_size": 63488 00:15:39.356 }, 00:15:39.356 { 00:15:39.356 "name": "BaseBdev4", 00:15:39.356 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:39.356 "is_configured": true, 00:15:39.356 "data_offset": 2048, 00:15:39.356 "data_size": 63488 00:15:39.356 } 00:15:39.356 ] 00:15:39.356 }' 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.356 03:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.925 [2024-11-20 03:22:29.389017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.925 03:22:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.925 "name": "Existed_Raid", 00:15:39.925 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:39.925 "strip_size_kb": 64, 00:15:39.925 "state": "configuring", 00:15:39.925 "raid_level": "raid5f", 00:15:39.925 "superblock": true, 00:15:39.925 "num_base_bdevs": 4, 00:15:39.925 "num_base_bdevs_discovered": 2, 00:15:39.925 "num_base_bdevs_operational": 4, 00:15:39.925 "base_bdevs_list": [ 00:15:39.925 { 00:15:39.925 "name": "BaseBdev1", 00:15:39.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.925 "is_configured": false, 00:15:39.925 "data_offset": 0, 00:15:39.925 "data_size": 0 00:15:39.925 }, 00:15:39.925 { 00:15:39.925 "name": null, 00:15:39.925 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:39.925 
"is_configured": false, 00:15:39.925 "data_offset": 0, 00:15:39.925 "data_size": 63488 00:15:39.925 }, 00:15:39.925 { 00:15:39.925 "name": "BaseBdev3", 00:15:39.925 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 00:15:39.925 "is_configured": true, 00:15:39.925 "data_offset": 2048, 00:15:39.925 "data_size": 63488 00:15:39.925 }, 00:15:39.925 { 00:15:39.925 "name": "BaseBdev4", 00:15:39.925 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:39.925 "is_configured": true, 00:15:39.925 "data_offset": 2048, 00:15:39.925 "data_size": 63488 00:15:39.925 } 00:15:39.925 ] 00:15:39.925 }' 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.925 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.184 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.184 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.184 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.184 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:40.184 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.444 [2024-11-20 03:22:29.888527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:40.444 BaseBdev1 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.444 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.444 [ 00:15:40.444 { 00:15:40.444 "name": "BaseBdev1", 00:15:40.444 "aliases": [ 00:15:40.444 "87a7deea-5104-4b8e-a17d-d3493896187b" 00:15:40.444 ], 00:15:40.444 "product_name": "Malloc disk", 00:15:40.444 "block_size": 512, 00:15:40.444 "num_blocks": 65536, 00:15:40.444 "uuid": "87a7deea-5104-4b8e-a17d-d3493896187b", 
00:15:40.444 "assigned_rate_limits": { 00:15:40.444 "rw_ios_per_sec": 0, 00:15:40.444 "rw_mbytes_per_sec": 0, 00:15:40.444 "r_mbytes_per_sec": 0, 00:15:40.444 "w_mbytes_per_sec": 0 00:15:40.444 }, 00:15:40.444 "claimed": true, 00:15:40.445 "claim_type": "exclusive_write", 00:15:40.445 "zoned": false, 00:15:40.445 "supported_io_types": { 00:15:40.445 "read": true, 00:15:40.445 "write": true, 00:15:40.445 "unmap": true, 00:15:40.445 "flush": true, 00:15:40.445 "reset": true, 00:15:40.445 "nvme_admin": false, 00:15:40.445 "nvme_io": false, 00:15:40.445 "nvme_io_md": false, 00:15:40.445 "write_zeroes": true, 00:15:40.445 "zcopy": true, 00:15:40.445 "get_zone_info": false, 00:15:40.445 "zone_management": false, 00:15:40.445 "zone_append": false, 00:15:40.445 "compare": false, 00:15:40.445 "compare_and_write": false, 00:15:40.445 "abort": true, 00:15:40.445 "seek_hole": false, 00:15:40.445 "seek_data": false, 00:15:40.445 "copy": true, 00:15:40.445 "nvme_iov_md": false 00:15:40.445 }, 00:15:40.445 "memory_domains": [ 00:15:40.445 { 00:15:40.445 "dma_device_id": "system", 00:15:40.445 "dma_device_type": 1 00:15:40.445 }, 00:15:40.445 { 00:15:40.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.445 "dma_device_type": 2 00:15:40.445 } 00:15:40.445 ], 00:15:40.445 "driver_specific": {} 00:15:40.445 } 00:15:40.445 ] 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.445 03:22:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.445 "name": "Existed_Raid", 00:15:40.445 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:40.445 "strip_size_kb": 64, 00:15:40.445 "state": "configuring", 00:15:40.445 "raid_level": "raid5f", 00:15:40.445 "superblock": true, 00:15:40.445 "num_base_bdevs": 4, 00:15:40.445 "num_base_bdevs_discovered": 3, 00:15:40.445 "num_base_bdevs_operational": 4, 00:15:40.445 "base_bdevs_list": [ 00:15:40.445 { 00:15:40.445 "name": "BaseBdev1", 00:15:40.445 "uuid": "87a7deea-5104-4b8e-a17d-d3493896187b", 
00:15:40.445 "is_configured": true, 00:15:40.445 "data_offset": 2048, 00:15:40.445 "data_size": 63488 00:15:40.445 }, 00:15:40.445 { 00:15:40.445 "name": null, 00:15:40.445 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:40.445 "is_configured": false, 00:15:40.445 "data_offset": 0, 00:15:40.445 "data_size": 63488 00:15:40.445 }, 00:15:40.445 { 00:15:40.445 "name": "BaseBdev3", 00:15:40.445 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 00:15:40.445 "is_configured": true, 00:15:40.445 "data_offset": 2048, 00:15:40.445 "data_size": 63488 00:15:40.445 }, 00:15:40.445 { 00:15:40.445 "name": "BaseBdev4", 00:15:40.445 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:40.445 "is_configured": true, 00:15:40.445 "data_offset": 2048, 00:15:40.445 "data_size": 63488 00:15:40.445 } 00:15:40.445 ] 00:15:40.445 }' 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.445 03:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 [2024-11-20 03:22:30.431706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.029 "name": "Existed_Raid", 00:15:41.029 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:41.029 "strip_size_kb": 64, 00:15:41.029 "state": "configuring", 00:15:41.029 "raid_level": "raid5f", 00:15:41.029 "superblock": true, 00:15:41.029 "num_base_bdevs": 4, 00:15:41.029 "num_base_bdevs_discovered": 2, 00:15:41.029 "num_base_bdevs_operational": 4, 00:15:41.029 "base_bdevs_list": [ 00:15:41.029 { 00:15:41.029 "name": "BaseBdev1", 00:15:41.029 "uuid": "87a7deea-5104-4b8e-a17d-d3493896187b", 00:15:41.029 "is_configured": true, 00:15:41.029 "data_offset": 2048, 00:15:41.029 "data_size": 63488 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "name": null, 00:15:41.029 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:41.029 "is_configured": false, 00:15:41.029 "data_offset": 0, 00:15:41.029 "data_size": 63488 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "name": null, 00:15:41.029 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 00:15:41.029 "is_configured": false, 00:15:41.029 "data_offset": 0, 00:15:41.029 "data_size": 63488 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "name": "BaseBdev4", 00:15:41.029 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:41.029 "is_configured": true, 00:15:41.029 "data_offset": 2048, 00:15:41.029 "data_size": 63488 00:15:41.029 } 00:15:41.029 ] 00:15:41.029 }' 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.029 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.301 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:41.301 03:22:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.301 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.301 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.560 [2024-11-20 03:22:30.946805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.560 03:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.560 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.560 "name": "Existed_Raid", 00:15:41.560 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:41.560 "strip_size_kb": 64, 00:15:41.560 "state": "configuring", 00:15:41.560 "raid_level": "raid5f", 00:15:41.560 "superblock": true, 00:15:41.560 "num_base_bdevs": 4, 00:15:41.560 "num_base_bdevs_discovered": 3, 00:15:41.560 "num_base_bdevs_operational": 4, 00:15:41.560 "base_bdevs_list": [ 00:15:41.560 { 00:15:41.560 "name": "BaseBdev1", 00:15:41.560 "uuid": "87a7deea-5104-4b8e-a17d-d3493896187b", 00:15:41.560 "is_configured": true, 00:15:41.560 "data_offset": 2048, 00:15:41.560 "data_size": 63488 00:15:41.560 }, 00:15:41.560 { 00:15:41.560 "name": null, 00:15:41.560 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:41.560 "is_configured": false, 00:15:41.560 "data_offset": 0, 00:15:41.560 "data_size": 63488 00:15:41.560 }, 00:15:41.560 { 00:15:41.560 "name": "BaseBdev3", 00:15:41.560 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 
00:15:41.560 "is_configured": true, 00:15:41.560 "data_offset": 2048, 00:15:41.560 "data_size": 63488 00:15:41.560 }, 00:15:41.560 { 00:15:41.560 "name": "BaseBdev4", 00:15:41.560 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:41.560 "is_configured": true, 00:15:41.560 "data_offset": 2048, 00:15:41.560 "data_size": 63488 00:15:41.560 } 00:15:41.560 ] 00:15:41.560 }' 00:15:41.560 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.560 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.819 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.819 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:41.819 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.819 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.819 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.819 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:41.819 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:41.819 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.819 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.079 [2024-11-20 03:22:31.453968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.079 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.079 "name": "Existed_Raid", 00:15:42.079 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:42.079 "strip_size_kb": 64, 00:15:42.079 "state": "configuring", 00:15:42.079 "raid_level": "raid5f", 
00:15:42.079 "superblock": true, 00:15:42.079 "num_base_bdevs": 4, 00:15:42.079 "num_base_bdevs_discovered": 2, 00:15:42.079 "num_base_bdevs_operational": 4, 00:15:42.079 "base_bdevs_list": [ 00:15:42.079 { 00:15:42.079 "name": null, 00:15:42.079 "uuid": "87a7deea-5104-4b8e-a17d-d3493896187b", 00:15:42.079 "is_configured": false, 00:15:42.079 "data_offset": 0, 00:15:42.079 "data_size": 63488 00:15:42.079 }, 00:15:42.079 { 00:15:42.080 "name": null, 00:15:42.080 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:42.080 "is_configured": false, 00:15:42.080 "data_offset": 0, 00:15:42.080 "data_size": 63488 00:15:42.080 }, 00:15:42.080 { 00:15:42.080 "name": "BaseBdev3", 00:15:42.080 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 00:15:42.080 "is_configured": true, 00:15:42.080 "data_offset": 2048, 00:15:42.080 "data_size": 63488 00:15:42.080 }, 00:15:42.080 { 00:15:42.080 "name": "BaseBdev4", 00:15:42.080 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:42.080 "is_configured": true, 00:15:42.080 "data_offset": 2048, 00:15:42.080 "data_size": 63488 00:15:42.080 } 00:15:42.080 ] 00:15:42.080 }' 00:15:42.080 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.080 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.649 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.649 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.649 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.649 03:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:42.649 03:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.649 [2024-11-20 03:22:32.039436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.649 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.649 "name": "Existed_Raid", 00:15:42.649 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:42.649 "strip_size_kb": 64, 00:15:42.649 "state": "configuring", 00:15:42.649 "raid_level": "raid5f", 00:15:42.649 "superblock": true, 00:15:42.649 "num_base_bdevs": 4, 00:15:42.649 "num_base_bdevs_discovered": 3, 00:15:42.649 "num_base_bdevs_operational": 4, 00:15:42.649 "base_bdevs_list": [ 00:15:42.650 { 00:15:42.650 "name": null, 00:15:42.650 "uuid": "87a7deea-5104-4b8e-a17d-d3493896187b", 00:15:42.650 "is_configured": false, 00:15:42.650 "data_offset": 0, 00:15:42.650 "data_size": 63488 00:15:42.650 }, 00:15:42.650 { 00:15:42.650 "name": "BaseBdev2", 00:15:42.650 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:42.650 "is_configured": true, 00:15:42.650 "data_offset": 2048, 00:15:42.650 "data_size": 63488 00:15:42.650 }, 00:15:42.650 { 00:15:42.650 "name": "BaseBdev3", 00:15:42.650 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 00:15:42.650 "is_configured": true, 00:15:42.650 "data_offset": 2048, 00:15:42.650 "data_size": 63488 00:15:42.650 }, 00:15:42.650 { 00:15:42.650 "name": "BaseBdev4", 00:15:42.650 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:42.650 "is_configured": true, 00:15:42.650 "data_offset": 2048, 00:15:42.650 "data_size": 63488 00:15:42.650 } 00:15:42.650 ] 00:15:42.650 }' 00:15:42.650 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:15:42.650 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.910 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 87a7deea-5104-4b8e-a17d-d3493896187b 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 [2024-11-20 03:22:32.619003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:43.171 [2024-11-20 03:22:32.619337] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:43.171 [2024-11-20 03:22:32.619387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:43.171 [2024-11-20 03:22:32.619689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:43.171 NewBaseBdev 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 [2024-11-20 03:22:32.627112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:43.171 [2024-11-20 03:22:32.627171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:43.171 [2024-11-20 03:22:32.627465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 [ 00:15:43.171 { 00:15:43.171 "name": "NewBaseBdev", 00:15:43.171 "aliases": [ 00:15:43.171 "87a7deea-5104-4b8e-a17d-d3493896187b" 00:15:43.171 ], 00:15:43.171 "product_name": "Malloc disk", 00:15:43.171 "block_size": 512, 00:15:43.171 "num_blocks": 65536, 00:15:43.171 "uuid": "87a7deea-5104-4b8e-a17d-d3493896187b", 00:15:43.171 "assigned_rate_limits": { 00:15:43.171 "rw_ios_per_sec": 0, 00:15:43.171 "rw_mbytes_per_sec": 0, 00:15:43.171 "r_mbytes_per_sec": 0, 00:15:43.171 "w_mbytes_per_sec": 0 00:15:43.171 }, 00:15:43.171 "claimed": true, 00:15:43.171 "claim_type": "exclusive_write", 00:15:43.171 "zoned": false, 00:15:43.171 "supported_io_types": { 00:15:43.171 "read": true, 00:15:43.171 "write": true, 00:15:43.171 "unmap": true, 00:15:43.171 "flush": true, 00:15:43.171 "reset": true, 00:15:43.171 "nvme_admin": false, 00:15:43.171 "nvme_io": false, 00:15:43.171 "nvme_io_md": false, 00:15:43.171 "write_zeroes": true, 00:15:43.171 "zcopy": true, 00:15:43.171 "get_zone_info": false, 00:15:43.171 "zone_management": false, 00:15:43.171 "zone_append": false, 00:15:43.171 "compare": false, 00:15:43.171 "compare_and_write": false, 00:15:43.171 "abort": true, 00:15:43.171 "seek_hole": false, 00:15:43.171 "seek_data": false, 00:15:43.171 "copy": true, 00:15:43.171 "nvme_iov_md": false 00:15:43.171 }, 00:15:43.171 "memory_domains": [ 00:15:43.171 { 00:15:43.171 "dma_device_id": "system", 00:15:43.171 "dma_device_type": 1 00:15:43.171 }, 00:15:43.171 { 00:15:43.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.171 "dma_device_type": 2 00:15:43.171 } 
00:15:43.171 ], 00:15:43.171 "driver_specific": {} 00:15:43.171 } 00:15:43.171 ] 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 
03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.171 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.171 "name": "Existed_Raid", 00:15:43.171 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:43.171 "strip_size_kb": 64, 00:15:43.171 "state": "online", 00:15:43.171 "raid_level": "raid5f", 00:15:43.171 "superblock": true, 00:15:43.171 "num_base_bdevs": 4, 00:15:43.171 "num_base_bdevs_discovered": 4, 00:15:43.171 "num_base_bdevs_operational": 4, 00:15:43.171 "base_bdevs_list": [ 00:15:43.171 { 00:15:43.171 "name": "NewBaseBdev", 00:15:43.171 "uuid": "87a7deea-5104-4b8e-a17d-d3493896187b", 00:15:43.171 "is_configured": true, 00:15:43.171 "data_offset": 2048, 00:15:43.171 "data_size": 63488 00:15:43.171 }, 00:15:43.171 { 00:15:43.171 "name": "BaseBdev2", 00:15:43.171 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:43.171 "is_configured": true, 00:15:43.171 "data_offset": 2048, 00:15:43.171 "data_size": 63488 00:15:43.171 }, 00:15:43.171 { 00:15:43.171 "name": "BaseBdev3", 00:15:43.171 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 00:15:43.171 "is_configured": true, 00:15:43.171 "data_offset": 2048, 00:15:43.171 "data_size": 63488 00:15:43.172 }, 00:15:43.172 { 00:15:43.172 "name": "BaseBdev4", 00:15:43.172 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:43.172 "is_configured": true, 00:15:43.172 "data_offset": 2048, 00:15:43.172 "data_size": 63488 00:15:43.172 } 00:15:43.172 ] 00:15:43.172 }' 00:15:43.172 03:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.172 03:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.741 [2024-11-20 03:22:33.095120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.741 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:43.741 "name": "Existed_Raid", 00:15:43.741 "aliases": [ 00:15:43.741 "087e27ca-668c-4f25-93e9-532d329e944b" 00:15:43.741 ], 00:15:43.741 "product_name": "Raid Volume", 00:15:43.741 "block_size": 512, 00:15:43.741 "num_blocks": 190464, 00:15:43.741 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:43.741 "assigned_rate_limits": { 00:15:43.741 "rw_ios_per_sec": 0, 00:15:43.741 "rw_mbytes_per_sec": 0, 00:15:43.741 "r_mbytes_per_sec": 0, 00:15:43.741 "w_mbytes_per_sec": 0 00:15:43.741 }, 00:15:43.741 "claimed": false, 00:15:43.741 "zoned": false, 00:15:43.741 "supported_io_types": { 00:15:43.741 "read": true, 00:15:43.741 "write": true, 00:15:43.741 "unmap": false, 00:15:43.741 "flush": false, 
00:15:43.741 "reset": true, 00:15:43.741 "nvme_admin": false, 00:15:43.741 "nvme_io": false, 00:15:43.741 "nvme_io_md": false, 00:15:43.741 "write_zeroes": true, 00:15:43.741 "zcopy": false, 00:15:43.741 "get_zone_info": false, 00:15:43.741 "zone_management": false, 00:15:43.741 "zone_append": false, 00:15:43.741 "compare": false, 00:15:43.741 "compare_and_write": false, 00:15:43.741 "abort": false, 00:15:43.741 "seek_hole": false, 00:15:43.741 "seek_data": false, 00:15:43.741 "copy": false, 00:15:43.741 "nvme_iov_md": false 00:15:43.741 }, 00:15:43.741 "driver_specific": { 00:15:43.741 "raid": { 00:15:43.741 "uuid": "087e27ca-668c-4f25-93e9-532d329e944b", 00:15:43.741 "strip_size_kb": 64, 00:15:43.741 "state": "online", 00:15:43.741 "raid_level": "raid5f", 00:15:43.741 "superblock": true, 00:15:43.741 "num_base_bdevs": 4, 00:15:43.741 "num_base_bdevs_discovered": 4, 00:15:43.741 "num_base_bdevs_operational": 4, 00:15:43.741 "base_bdevs_list": [ 00:15:43.741 { 00:15:43.741 "name": "NewBaseBdev", 00:15:43.741 "uuid": "87a7deea-5104-4b8e-a17d-d3493896187b", 00:15:43.741 "is_configured": true, 00:15:43.741 "data_offset": 2048, 00:15:43.741 "data_size": 63488 00:15:43.741 }, 00:15:43.741 { 00:15:43.741 "name": "BaseBdev2", 00:15:43.742 "uuid": "3604d568-e928-4803-b84e-40e07bebaa69", 00:15:43.742 "is_configured": true, 00:15:43.742 "data_offset": 2048, 00:15:43.742 "data_size": 63488 00:15:43.742 }, 00:15:43.742 { 00:15:43.742 "name": "BaseBdev3", 00:15:43.742 "uuid": "fba12300-f2b9-46a0-8e2f-409e465be783", 00:15:43.742 "is_configured": true, 00:15:43.742 "data_offset": 2048, 00:15:43.742 "data_size": 63488 00:15:43.742 }, 00:15:43.742 { 00:15:43.742 "name": "BaseBdev4", 00:15:43.742 "uuid": "e3360f82-b4d2-4dfb-84af-cb605b0def41", 00:15:43.742 "is_configured": true, 00:15:43.742 "data_offset": 2048, 00:15:43.742 "data_size": 63488 00:15:43.742 } 00:15:43.742 ] 00:15:43.742 } 00:15:43.742 } 00:15:43.742 }' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:43.742 BaseBdev2 00:15:43.742 BaseBdev3 00:15:43.742 BaseBdev4' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:43.742 
03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.742 03:22:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.742 [2024-11-20 03:22:33.362467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.742 [2024-11-20 03:22:33.362498] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.742 [2024-11-20 03:22:33.362570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.742 [2024-11-20 03:22:33.362910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.742 [2024-11-20 03:22:33.362928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83242 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83242 ']' 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83242 00:15:43.742 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:44.002 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.002 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83242 00:15:44.002 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.002 killing process with pid 83242 00:15:44.002 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.002 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83242' 00:15:44.002 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83242 00:15:44.002 [2024-11-20 03:22:33.410602] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.002 03:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83242 00:15:44.262 [2024-11-20 03:22:33.806395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:45.641 03:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:45.641 00:15:45.641 real 0m11.473s 00:15:45.641 user 0m18.284s 00:15:45.641 sys 0m2.016s 00:15:45.641 03:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.641 03:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.641 ************************************ 00:15:45.641 END TEST raid5f_state_function_test_sb 00:15:45.641 ************************************ 00:15:45.641 03:22:34 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:45.641 03:22:34 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:45.641 03:22:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.641 03:22:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:45.641 ************************************ 00:15:45.641 START TEST raid5f_superblock_test 00:15:45.641 ************************************ 00:15:45.641 03:22:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:45.641 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:45.641 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:45.641 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:45.641 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:45.641 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83919 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83919 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83919 ']' 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.642 03:22:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.642 [2024-11-20 03:22:35.060111] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:15:45.642 [2024-11-20 03:22:35.060230] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83919 ] 00:15:45.642 [2024-11-20 03:22:35.212868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.901 [2024-11-20 03:22:35.324280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.901 [2024-11-20 03:22:35.502635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.901 [2024-11-20 03:22:35.502692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.470 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.470 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:46.470 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:46.470 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.470 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:46.470 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:46.470 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:46.470 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.470 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.471 malloc1 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.471 [2024-11-20 03:22:35.960651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:46.471 [2024-11-20 03:22:35.960714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.471 [2024-11-20 03:22:35.960738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:46.471 [2024-11-20 03:22:35.960747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.471 [2024-11-20 03:22:35.962813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.471 [2024-11-20 03:22:35.962850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:46.471 pt1 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.471 03:22:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.471 malloc2 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.471 [2024-11-20 03:22:36.013989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.471 [2024-11-20 03:22:36.014100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.471 [2024-11-20 03:22:36.014128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:46.471 [2024-11-20 03:22:36.014139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.471 [2024-11-20 03:22:36.016490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.471 [2024-11-20 03:22:36.016530] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.471 pt2 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.471 malloc3 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.471 [2024-11-20 03:22:36.083359] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:46.471 [2024-11-20 03:22:36.083457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.471 [2024-11-20 03:22:36.083497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:46.471 [2024-11-20 03:22:36.083526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.471 [2024-11-20 03:22:36.085555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.471 [2024-11-20 03:22:36.085630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:46.471 pt3 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:46.471 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.471 03:22:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.731 malloc4 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.731 [2024-11-20 03:22:36.143161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:46.731 [2024-11-20 03:22:36.143258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.731 [2024-11-20 03:22:36.143294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:46.731 [2024-11-20 03:22:36.143326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.731 [2024-11-20 03:22:36.145377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.731 [2024-11-20 03:22:36.145443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:46.731 pt4 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.731 03:22:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.731 [2024-11-20 03:22:36.155169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:46.731 [2024-11-20 03:22:36.156976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.731 [2024-11-20 03:22:36.157076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:46.731 [2024-11-20 03:22:36.157140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:46.731 [2024-11-20 03:22:36.157351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:46.731 [2024-11-20 03:22:36.157367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:46.731 [2024-11-20 03:22:36.157607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:46.731 [2024-11-20 03:22:36.164865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:46.732 [2024-11-20 03:22:36.164887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:46.732 [2024-11-20 03:22:36.165071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.732 
03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.732 "name": "raid_bdev1", 00:15:46.732 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:46.732 "strip_size_kb": 64, 00:15:46.732 "state": "online", 00:15:46.732 "raid_level": "raid5f", 00:15:46.732 "superblock": true, 00:15:46.732 "num_base_bdevs": 4, 00:15:46.732 "num_base_bdevs_discovered": 4, 00:15:46.732 "num_base_bdevs_operational": 4, 00:15:46.732 "base_bdevs_list": [ 00:15:46.732 { 00:15:46.732 "name": "pt1", 00:15:46.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.732 "is_configured": true, 00:15:46.732 "data_offset": 2048, 00:15:46.732 "data_size": 63488 00:15:46.732 }, 00:15:46.732 { 00:15:46.732 "name": "pt2", 00:15:46.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.732 "is_configured": true, 00:15:46.732 "data_offset": 2048, 00:15:46.732 
"data_size": 63488 00:15:46.732 }, 00:15:46.732 { 00:15:46.732 "name": "pt3", 00:15:46.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.732 "is_configured": true, 00:15:46.732 "data_offset": 2048, 00:15:46.732 "data_size": 63488 00:15:46.732 }, 00:15:46.732 { 00:15:46.732 "name": "pt4", 00:15:46.732 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.732 "is_configured": true, 00:15:46.732 "data_offset": 2048, 00:15:46.732 "data_size": 63488 00:15:46.732 } 00:15:46.732 ] 00:15:46.732 }' 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.732 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.991 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.991 [2024-11-20 03:22:36.608979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.251 03:22:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.251 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.251 "name": "raid_bdev1", 00:15:47.251 "aliases": [ 00:15:47.251 "aeb6e57d-a815-4d4c-910b-d0caaef99893" 00:15:47.251 ], 00:15:47.251 "product_name": "Raid Volume", 00:15:47.251 "block_size": 512, 00:15:47.251 "num_blocks": 190464, 00:15:47.251 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:47.251 "assigned_rate_limits": { 00:15:47.251 "rw_ios_per_sec": 0, 00:15:47.251 "rw_mbytes_per_sec": 0, 00:15:47.251 "r_mbytes_per_sec": 0, 00:15:47.251 "w_mbytes_per_sec": 0 00:15:47.251 }, 00:15:47.251 "claimed": false, 00:15:47.251 "zoned": false, 00:15:47.251 "supported_io_types": { 00:15:47.251 "read": true, 00:15:47.251 "write": true, 00:15:47.251 "unmap": false, 00:15:47.251 "flush": false, 00:15:47.251 "reset": true, 00:15:47.251 "nvme_admin": false, 00:15:47.251 "nvme_io": false, 00:15:47.251 "nvme_io_md": false, 00:15:47.251 "write_zeroes": true, 00:15:47.251 "zcopy": false, 00:15:47.252 "get_zone_info": false, 00:15:47.252 "zone_management": false, 00:15:47.252 "zone_append": false, 00:15:47.252 "compare": false, 00:15:47.252 "compare_and_write": false, 00:15:47.252 "abort": false, 00:15:47.252 "seek_hole": false, 00:15:47.252 "seek_data": false, 00:15:47.252 "copy": false, 00:15:47.252 "nvme_iov_md": false 00:15:47.252 }, 00:15:47.252 "driver_specific": { 00:15:47.252 "raid": { 00:15:47.252 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:47.252 "strip_size_kb": 64, 00:15:47.252 "state": "online", 00:15:47.252 "raid_level": "raid5f", 00:15:47.252 "superblock": true, 00:15:47.252 "num_base_bdevs": 4, 00:15:47.252 "num_base_bdevs_discovered": 4, 00:15:47.252 "num_base_bdevs_operational": 4, 00:15:47.252 "base_bdevs_list": [ 00:15:47.252 { 00:15:47.252 "name": "pt1", 00:15:47.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.252 "is_configured": true, 00:15:47.252 "data_offset": 2048, 
00:15:47.252 "data_size": 63488 00:15:47.252 }, 00:15:47.252 { 00:15:47.252 "name": "pt2", 00:15:47.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.252 "is_configured": true, 00:15:47.252 "data_offset": 2048, 00:15:47.252 "data_size": 63488 00:15:47.252 }, 00:15:47.252 { 00:15:47.252 "name": "pt3", 00:15:47.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.252 "is_configured": true, 00:15:47.252 "data_offset": 2048, 00:15:47.252 "data_size": 63488 00:15:47.252 }, 00:15:47.252 { 00:15:47.252 "name": "pt4", 00:15:47.252 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.252 "is_configured": true, 00:15:47.252 "data_offset": 2048, 00:15:47.252 "data_size": 63488 00:15:47.252 } 00:15:47.252 ] 00:15:47.252 } 00:15:47.252 } 00:15:47.252 }' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:47.252 pt2 00:15:47.252 pt3 00:15:47.252 pt4' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.252 03:22:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.252 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.512 [2024-11-20 03:22:36.920400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aeb6e57d-a815-4d4c-910b-d0caaef99893 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
aeb6e57d-a815-4d4c-910b-d0caaef99893 ']' 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.512 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.512 [2024-11-20 03:22:36.968140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.512 [2024-11-20 03:22:36.968213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.512 [2024-11-20 03:22:36.968305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.513 [2024-11-20 03:22:36.968390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.513 [2024-11-20 03:22:36.968405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:47.513 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.513 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.513 03:22:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:47.513 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.513 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.513 03:22:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.513 
03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.513 03:22:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.513 [2024-11-20 03:22:37.119882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:47.513 [2024-11-20 03:22:37.121679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:47.513 [2024-11-20 03:22:37.121774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:47.513 [2024-11-20 03:22:37.121812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:47.513 [2024-11-20 03:22:37.121864] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:47.513 [2024-11-20 03:22:37.121908] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:47.513 [2024-11-20 03:22:37.121927] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:47.513 [2024-11-20 03:22:37.121946] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:47.513 [2024-11-20 03:22:37.121959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.513 [2024-11-20 03:22:37.121970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:47.513 request: 00:15:47.513 { 00:15:47.513 "name": "raid_bdev1", 00:15:47.513 "raid_level": "raid5f", 00:15:47.513 "base_bdevs": [ 00:15:47.513 "malloc1", 00:15:47.513 "malloc2", 00:15:47.513 "malloc3", 00:15:47.513 "malloc4" 00:15:47.513 ], 00:15:47.513 "strip_size_kb": 64, 00:15:47.513 "superblock": false, 00:15:47.513 "method": "bdev_raid_create", 00:15:47.513 "req_id": 1 00:15:47.513 } 00:15:47.513 Got JSON-RPC error response 
00:15:47.513 response: 00:15:47.513 { 00:15:47.513 "code": -17, 00:15:47.513 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:47.513 } 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.513 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.773 [2024-11-20 03:22:37.171778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:47.773 [2024-11-20 03:22:37.171869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:47.773 [2024-11-20 03:22:37.171902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:47.773 [2024-11-20 03:22:37.171931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.773 [2024-11-20 03:22:37.174086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.773 [2024-11-20 03:22:37.174161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:47.773 [2024-11-20 03:22:37.174254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:47.773 [2024-11-20 03:22:37.174345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:47.773 pt1 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.773 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.774 "name": "raid_bdev1", 00:15:47.774 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:47.774 "strip_size_kb": 64, 00:15:47.774 "state": "configuring", 00:15:47.774 "raid_level": "raid5f", 00:15:47.774 "superblock": true, 00:15:47.774 "num_base_bdevs": 4, 00:15:47.774 "num_base_bdevs_discovered": 1, 00:15:47.774 "num_base_bdevs_operational": 4, 00:15:47.774 "base_bdevs_list": [ 00:15:47.774 { 00:15:47.774 "name": "pt1", 00:15:47.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.774 "is_configured": true, 00:15:47.774 "data_offset": 2048, 00:15:47.774 "data_size": 63488 00:15:47.774 }, 00:15:47.774 { 00:15:47.774 "name": null, 00:15:47.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.774 "is_configured": false, 00:15:47.774 "data_offset": 2048, 00:15:47.774 "data_size": 63488 00:15:47.774 }, 00:15:47.774 { 00:15:47.774 "name": null, 00:15:47.774 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.774 "is_configured": false, 00:15:47.774 "data_offset": 2048, 00:15:47.774 "data_size": 63488 00:15:47.774 }, 00:15:47.774 { 00:15:47.774 "name": null, 00:15:47.774 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.774 "is_configured": false, 00:15:47.774 "data_offset": 2048, 00:15:47.774 "data_size": 63488 00:15:47.774 } 00:15:47.774 ] 00:15:47.774 }' 
00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.774 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.034 [2024-11-20 03:22:37.635017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.034 [2024-11-20 03:22:37.635091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.034 [2024-11-20 03:22:37.635112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:48.034 [2024-11-20 03:22:37.635122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.034 [2024-11-20 03:22:37.635554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.034 [2024-11-20 03:22:37.635578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.034 [2024-11-20 03:22:37.635669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:48.034 [2024-11-20 03:22:37.635699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.034 pt2 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.034 [2024-11-20 03:22:37.643022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.034 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.294 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:48.294 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.294 "name": "raid_bdev1", 00:15:48.294 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:48.294 "strip_size_kb": 64, 00:15:48.294 "state": "configuring", 00:15:48.294 "raid_level": "raid5f", 00:15:48.294 "superblock": true, 00:15:48.294 "num_base_bdevs": 4, 00:15:48.294 "num_base_bdevs_discovered": 1, 00:15:48.294 "num_base_bdevs_operational": 4, 00:15:48.294 "base_bdevs_list": [ 00:15:48.294 { 00:15:48.294 "name": "pt1", 00:15:48.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.294 "is_configured": true, 00:15:48.294 "data_offset": 2048, 00:15:48.294 "data_size": 63488 00:15:48.294 }, 00:15:48.294 { 00:15:48.294 "name": null, 00:15:48.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.294 "is_configured": false, 00:15:48.294 "data_offset": 0, 00:15:48.294 "data_size": 63488 00:15:48.294 }, 00:15:48.294 { 00:15:48.294 "name": null, 00:15:48.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.294 "is_configured": false, 00:15:48.294 "data_offset": 2048, 00:15:48.294 "data_size": 63488 00:15:48.294 }, 00:15:48.294 { 00:15:48.294 "name": null, 00:15:48.294 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.294 "is_configured": false, 00:15:48.294 "data_offset": 2048, 00:15:48.294 "data_size": 63488 00:15:48.294 } 00:15:48.294 ] 00:15:48.294 }' 00:15:48.294 03:22:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.294 03:22:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.554 [2024-11-20 03:22:38.074287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.554 [2024-11-20 03:22:38.074414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.554 [2024-11-20 03:22:38.074457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:48.554 [2024-11-20 03:22:38.074542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.554 [2024-11-20 03:22:38.075072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.554 [2024-11-20 03:22:38.075134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.554 [2024-11-20 03:22:38.075245] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:48.554 [2024-11-20 03:22:38.075298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.554 pt2 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.554 [2024-11-20 03:22:38.086224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:48.554 [2024-11-20 03:22:38.086303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.554 [2024-11-20 03:22:38.086338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:48.554 [2024-11-20 03:22:38.086364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.554 [2024-11-20 03:22:38.086818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.554 [2024-11-20 03:22:38.086880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:48.554 [2024-11-20 03:22:38.086980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:48.554 [2024-11-20 03:22:38.087031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.554 pt3 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.554 [2024-11-20 03:22:38.098184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:48.554 [2024-11-20 03:22:38.098227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.554 [2024-11-20 03:22:38.098248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:48.554 [2024-11-20 03:22:38.098255] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.554 [2024-11-20 03:22:38.098670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.554 [2024-11-20 03:22:38.098690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:48.554 [2024-11-20 03:22:38.098757] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:48.554 [2024-11-20 03:22:38.098776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:48.554 [2024-11-20 03:22:38.098906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:48.554 [2024-11-20 03:22:38.098915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:48.554 [2024-11-20 03:22:38.099146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:48.554 [2024-11-20 03:22:38.106067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:48.554 [2024-11-20 03:22:38.106089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:48.554 [2024-11-20 03:22:38.106262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.554 pt4 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.554 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.554 "name": "raid_bdev1", 00:15:48.554 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:48.554 "strip_size_kb": 64, 00:15:48.554 "state": "online", 00:15:48.554 "raid_level": "raid5f", 00:15:48.554 "superblock": true, 00:15:48.554 "num_base_bdevs": 4, 00:15:48.554 "num_base_bdevs_discovered": 4, 00:15:48.554 "num_base_bdevs_operational": 4, 00:15:48.554 "base_bdevs_list": [ 00:15:48.554 { 00:15:48.554 "name": "pt1", 00:15:48.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.554 "is_configured": true, 00:15:48.554 
"data_offset": 2048, 00:15:48.554 "data_size": 63488 00:15:48.554 }, 00:15:48.554 { 00:15:48.554 "name": "pt2", 00:15:48.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.554 "is_configured": true, 00:15:48.554 "data_offset": 2048, 00:15:48.554 "data_size": 63488 00:15:48.554 }, 00:15:48.554 { 00:15:48.554 "name": "pt3", 00:15:48.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.554 "is_configured": true, 00:15:48.554 "data_offset": 2048, 00:15:48.554 "data_size": 63488 00:15:48.554 }, 00:15:48.554 { 00:15:48.554 "name": "pt4", 00:15:48.554 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.554 "is_configured": true, 00:15:48.554 "data_offset": 2048, 00:15:48.554 "data_size": 63488 00:15:48.554 } 00:15:48.554 ] 00:15:48.554 }' 00:15:48.555 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.555 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.125 03:22:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.125 [2024-11-20 03:22:38.586573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:49.125 "name": "raid_bdev1", 00:15:49.125 "aliases": [ 00:15:49.125 "aeb6e57d-a815-4d4c-910b-d0caaef99893" 00:15:49.125 ], 00:15:49.125 "product_name": "Raid Volume", 00:15:49.125 "block_size": 512, 00:15:49.125 "num_blocks": 190464, 00:15:49.125 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:49.125 "assigned_rate_limits": { 00:15:49.125 "rw_ios_per_sec": 0, 00:15:49.125 "rw_mbytes_per_sec": 0, 00:15:49.125 "r_mbytes_per_sec": 0, 00:15:49.125 "w_mbytes_per_sec": 0 00:15:49.125 }, 00:15:49.125 "claimed": false, 00:15:49.125 "zoned": false, 00:15:49.125 "supported_io_types": { 00:15:49.125 "read": true, 00:15:49.125 "write": true, 00:15:49.125 "unmap": false, 00:15:49.125 "flush": false, 00:15:49.125 "reset": true, 00:15:49.125 "nvme_admin": false, 00:15:49.125 "nvme_io": false, 00:15:49.125 "nvme_io_md": false, 00:15:49.125 "write_zeroes": true, 00:15:49.125 "zcopy": false, 00:15:49.125 "get_zone_info": false, 00:15:49.125 "zone_management": false, 00:15:49.125 "zone_append": false, 00:15:49.125 "compare": false, 00:15:49.125 "compare_and_write": false, 00:15:49.125 "abort": false, 00:15:49.125 "seek_hole": false, 00:15:49.125 "seek_data": false, 00:15:49.125 "copy": false, 00:15:49.125 "nvme_iov_md": false 00:15:49.125 }, 00:15:49.125 "driver_specific": { 00:15:49.125 "raid": { 00:15:49.125 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:49.125 "strip_size_kb": 64, 00:15:49.125 "state": "online", 00:15:49.125 "raid_level": "raid5f", 00:15:49.125 "superblock": true, 00:15:49.125 "num_base_bdevs": 4, 00:15:49.125 "num_base_bdevs_discovered": 4, 
00:15:49.125 "num_base_bdevs_operational": 4, 00:15:49.125 "base_bdevs_list": [ 00:15:49.125 { 00:15:49.125 "name": "pt1", 00:15:49.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.125 "is_configured": true, 00:15:49.125 "data_offset": 2048, 00:15:49.125 "data_size": 63488 00:15:49.125 }, 00:15:49.125 { 00:15:49.125 "name": "pt2", 00:15:49.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.125 "is_configured": true, 00:15:49.125 "data_offset": 2048, 00:15:49.125 "data_size": 63488 00:15:49.125 }, 00:15:49.125 { 00:15:49.125 "name": "pt3", 00:15:49.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.125 "is_configured": true, 00:15:49.125 "data_offset": 2048, 00:15:49.125 "data_size": 63488 00:15:49.125 }, 00:15:49.125 { 00:15:49.125 "name": "pt4", 00:15:49.125 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.125 "is_configured": true, 00:15:49.125 "data_offset": 2048, 00:15:49.125 "data_size": 63488 00:15:49.125 } 00:15:49.125 ] 00:15:49.125 } 00:15:49.125 } 00:15:49.125 }' 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:49.125 pt2 00:15:49.125 pt3 00:15:49.125 pt4' 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.125 03:22:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.125 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.385 
03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 [2024-11-20 03:22:38.877988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aeb6e57d-a815-4d4c-910b-d0caaef99893 '!=' aeb6e57d-a815-4d4c-910b-d0caaef99893 ']' 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 [2024-11-20 03:22:38.921781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.385 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.386 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.386 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.386 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.386 "name": "raid_bdev1", 00:15:49.386 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:49.386 "strip_size_kb": 64, 00:15:49.386 "state": "online", 00:15:49.386 "raid_level": "raid5f", 00:15:49.386 "superblock": true, 00:15:49.386 "num_base_bdevs": 4, 00:15:49.386 "num_base_bdevs_discovered": 3, 00:15:49.386 "num_base_bdevs_operational": 3, 00:15:49.386 "base_bdevs_list": [ 00:15:49.386 { 00:15:49.386 "name": null, 00:15:49.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.386 "is_configured": false, 00:15:49.386 "data_offset": 0, 00:15:49.386 "data_size": 63488 00:15:49.386 }, 00:15:49.386 { 00:15:49.386 "name": "pt2", 00:15:49.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.386 "is_configured": true, 00:15:49.386 "data_offset": 2048, 00:15:49.386 "data_size": 63488 00:15:49.386 }, 00:15:49.386 { 00:15:49.386 "name": "pt3", 00:15:49.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.386 "is_configured": true, 00:15:49.386 "data_offset": 2048, 00:15:49.386 "data_size": 63488 00:15:49.386 }, 00:15:49.386 { 00:15:49.386 "name": "pt4", 00:15:49.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.386 "is_configured": true, 00:15:49.386 
"data_offset": 2048, 00:15:49.386 "data_size": 63488 00:15:49.386 } 00:15:49.386 ] 00:15:49.386 }' 00:15:49.386 03:22:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.386 03:22:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.954 [2024-11-20 03:22:39.341071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.954 [2024-11-20 03:22:39.341150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.954 [2024-11-20 03:22:39.341251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.954 [2024-11-20 03:22:39.341346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.954 [2024-11-20 03:22:39.341392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.954 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.954 [2024-11-20 03:22:39.432854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.954 [2024-11-20 03:22:39.432904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.954 [2024-11-20 03:22:39.432922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:49.954 [2024-11-20 03:22:39.432931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.954 [2024-11-20 03:22:39.435201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.954 [2024-11-20 03:22:39.435242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.954 [2024-11-20 03:22:39.435333] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:49.954 [2024-11-20 03:22:39.435376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.954 pt2 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.955 "name": "raid_bdev1", 00:15:49.955 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:49.955 "strip_size_kb": 64, 00:15:49.955 "state": "configuring", 00:15:49.955 "raid_level": "raid5f", 00:15:49.955 "superblock": true, 00:15:49.955 
"num_base_bdevs": 4, 00:15:49.955 "num_base_bdevs_discovered": 1, 00:15:49.955 "num_base_bdevs_operational": 3, 00:15:49.955 "base_bdevs_list": [ 00:15:49.955 { 00:15:49.955 "name": null, 00:15:49.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.955 "is_configured": false, 00:15:49.955 "data_offset": 2048, 00:15:49.955 "data_size": 63488 00:15:49.955 }, 00:15:49.955 { 00:15:49.955 "name": "pt2", 00:15:49.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.955 "is_configured": true, 00:15:49.955 "data_offset": 2048, 00:15:49.955 "data_size": 63488 00:15:49.955 }, 00:15:49.955 { 00:15:49.955 "name": null, 00:15:49.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.955 "is_configured": false, 00:15:49.955 "data_offset": 2048, 00:15:49.955 "data_size": 63488 00:15:49.955 }, 00:15:49.955 { 00:15:49.955 "name": null, 00:15:49.955 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.955 "is_configured": false, 00:15:49.955 "data_offset": 2048, 00:15:49.955 "data_size": 63488 00:15:49.955 } 00:15:49.955 ] 00:15:49.955 }' 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.955 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.522 [2024-11-20 03:22:39.864151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:50.522 [2024-11-20 
03:22:39.864274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.522 [2024-11-20 03:22:39.864314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:50.522 [2024-11-20 03:22:39.864342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.522 [2024-11-20 03:22:39.864823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.522 [2024-11-20 03:22:39.864879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:50.522 [2024-11-20 03:22:39.864991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:50.522 [2024-11-20 03:22:39.865049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:50.522 pt3 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.522 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.523 "name": "raid_bdev1", 00:15:50.523 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:50.523 "strip_size_kb": 64, 00:15:50.523 "state": "configuring", 00:15:50.523 "raid_level": "raid5f", 00:15:50.523 "superblock": true, 00:15:50.523 "num_base_bdevs": 4, 00:15:50.523 "num_base_bdevs_discovered": 2, 00:15:50.523 "num_base_bdevs_operational": 3, 00:15:50.523 "base_bdevs_list": [ 00:15:50.523 { 00:15:50.523 "name": null, 00:15:50.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.523 "is_configured": false, 00:15:50.523 "data_offset": 2048, 00:15:50.523 "data_size": 63488 00:15:50.523 }, 00:15:50.523 { 00:15:50.523 "name": "pt2", 00:15:50.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.523 "is_configured": true, 00:15:50.523 "data_offset": 2048, 00:15:50.523 "data_size": 63488 00:15:50.523 }, 00:15:50.523 { 00:15:50.523 "name": "pt3", 00:15:50.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.523 "is_configured": true, 00:15:50.523 "data_offset": 2048, 00:15:50.523 "data_size": 63488 00:15:50.523 }, 00:15:50.523 { 00:15:50.523 "name": null, 00:15:50.523 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.523 "is_configured": false, 00:15:50.523 "data_offset": 2048, 
00:15:50.523 "data_size": 63488 00:15:50.523 } 00:15:50.523 ] 00:15:50.523 }' 00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.523 03:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.782 [2024-11-20 03:22:40.307413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:50.782 [2024-11-20 03:22:40.307484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.782 [2024-11-20 03:22:40.307508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:50.782 [2024-11-20 03:22:40.307518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.782 [2024-11-20 03:22:40.308045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.782 [2024-11-20 03:22:40.308120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:50.782 [2024-11-20 03:22:40.308240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:50.782 [2024-11-20 03:22:40.308295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:50.782 [2024-11-20 03:22:40.308461] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:50.782 [2024-11-20 03:22:40.308500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:50.782 [2024-11-20 03:22:40.308791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:50.782 [2024-11-20 03:22:40.316663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:50.782 [2024-11-20 03:22:40.316688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:50.782 pt4 00:15:50.782 [2024-11-20 03:22:40.317030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.782 
03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.782 "name": "raid_bdev1", 00:15:50.782 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:50.782 "strip_size_kb": 64, 00:15:50.782 "state": "online", 00:15:50.782 "raid_level": "raid5f", 00:15:50.782 "superblock": true, 00:15:50.782 "num_base_bdevs": 4, 00:15:50.782 "num_base_bdevs_discovered": 3, 00:15:50.782 "num_base_bdevs_operational": 3, 00:15:50.782 "base_bdevs_list": [ 00:15:50.782 { 00:15:50.782 "name": null, 00:15:50.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.782 "is_configured": false, 00:15:50.782 "data_offset": 2048, 00:15:50.782 "data_size": 63488 00:15:50.782 }, 00:15:50.782 { 00:15:50.782 "name": "pt2", 00:15:50.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.782 "is_configured": true, 00:15:50.782 "data_offset": 2048, 00:15:50.782 "data_size": 63488 00:15:50.782 }, 00:15:50.782 { 00:15:50.782 "name": "pt3", 00:15:50.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.782 "is_configured": true, 00:15:50.782 "data_offset": 2048, 00:15:50.782 "data_size": 63488 00:15:50.782 }, 00:15:50.782 { 00:15:50.782 "name": "pt4", 00:15:50.782 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.782 "is_configured": true, 00:15:50.782 "data_offset": 2048, 00:15:50.782 "data_size": 63488 00:15:50.782 } 00:15:50.782 ] 00:15:50.782 }' 00:15:50.782 03:22:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.782 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.351 [2024-11-20 03:22:40.762718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.351 [2024-11-20 03:22:40.762811] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.351 [2024-11-20 03:22:40.762919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.351 [2024-11-20 03:22:40.763038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.351 [2024-11-20 03:22:40.763096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.351 [2024-11-20 03:22:40.838567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.351 [2024-11-20 03:22:40.838696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.351 [2024-11-20 03:22:40.838730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:51.351 [2024-11-20 03:22:40.838744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.351 [2024-11-20 03:22:40.841279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.351 [2024-11-20 03:22:40.841319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.351 [2024-11-20 03:22:40.841395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:51.351 [2024-11-20 03:22:40.841447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.351 
[2024-11-20 03:22:40.841580] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:51.351 [2024-11-20 03:22:40.841592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.351 [2024-11-20 03:22:40.841608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:51.351 [2024-11-20 03:22:40.841680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.351 [2024-11-20 03:22:40.841804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.351 pt1 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.351 "name": "raid_bdev1", 00:15:51.351 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:51.351 "strip_size_kb": 64, 00:15:51.351 "state": "configuring", 00:15:51.351 "raid_level": "raid5f", 00:15:51.351 "superblock": true, 00:15:51.351 "num_base_bdevs": 4, 00:15:51.351 "num_base_bdevs_discovered": 2, 00:15:51.351 "num_base_bdevs_operational": 3, 00:15:51.351 "base_bdevs_list": [ 00:15:51.351 { 00:15:51.351 "name": null, 00:15:51.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.351 "is_configured": false, 00:15:51.351 "data_offset": 2048, 00:15:51.351 "data_size": 63488 00:15:51.351 }, 00:15:51.351 { 00:15:51.351 "name": "pt2", 00:15:51.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.351 "is_configured": true, 00:15:51.351 "data_offset": 2048, 00:15:51.351 "data_size": 63488 00:15:51.351 }, 00:15:51.351 { 00:15:51.351 "name": "pt3", 00:15:51.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.351 "is_configured": true, 00:15:51.351 "data_offset": 2048, 00:15:51.351 "data_size": 63488 00:15:51.351 }, 00:15:51.351 { 00:15:51.351 "name": null, 00:15:51.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.351 "is_configured": false, 00:15:51.351 "data_offset": 2048, 00:15:51.351 "data_size": 63488 00:15:51.351 } 00:15:51.351 ] 
00:15:51.351 }' 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.351 03:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.921 [2024-11-20 03:22:41.353868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:51.921 [2024-11-20 03:22:41.354011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.921 [2024-11-20 03:22:41.354060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:51.921 [2024-11-20 03:22:41.354092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.921 [2024-11-20 03:22:41.354649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.921 [2024-11-20 03:22:41.354714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:51.921 [2024-11-20 03:22:41.354841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:51.921 [2024-11-20 03:22:41.354911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:51.921 [2024-11-20 03:22:41.355127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:51.921 [2024-11-20 03:22:41.355173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:51.921 [2024-11-20 03:22:41.355487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:51.921 [2024-11-20 03:22:41.363051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:51.921 [2024-11-20 03:22:41.363116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:51.921 [2024-11-20 03:22:41.363435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.921 pt4 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.921 03:22:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.921 "name": "raid_bdev1", 00:15:51.921 "uuid": "aeb6e57d-a815-4d4c-910b-d0caaef99893", 00:15:51.921 "strip_size_kb": 64, 00:15:51.921 "state": "online", 00:15:51.921 "raid_level": "raid5f", 00:15:51.921 "superblock": true, 00:15:51.921 "num_base_bdevs": 4, 00:15:51.921 "num_base_bdevs_discovered": 3, 00:15:51.921 "num_base_bdevs_operational": 3, 00:15:51.921 "base_bdevs_list": [ 00:15:51.921 { 00:15:51.921 "name": null, 00:15:51.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.921 "is_configured": false, 00:15:51.921 "data_offset": 2048, 00:15:51.921 "data_size": 63488 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": "pt2", 00:15:51.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 2048, 00:15:51.921 "data_size": 63488 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": "pt3", 00:15:51.921 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 2048, 00:15:51.921 "data_size": 63488 
00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": "pt4", 00:15:51.921 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 2048, 00:15:51.921 "data_size": 63488 00:15:51.921 } 00:15:51.921 ] 00:15:51.921 }' 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.921 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.181 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:52.181 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.181 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.440 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:52.440 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.440 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:52.440 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.440 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:52.440 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.440 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.440 [2024-11-20 03:22:41.872260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.440 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.440 03:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' aeb6e57d-a815-4d4c-910b-d0caaef99893 '!=' aeb6e57d-a815-4d4c-910b-d0caaef99893 ']' 00:15:52.440 03:22:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83919 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83919 ']' 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83919 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83919 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.441 killing process with pid 83919 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83919' 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83919 00:15:52.441 [2024-11-20 03:22:41.944874] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.441 [2024-11-20 03:22:41.944973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.441 03:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83919 00:15:52.441 [2024-11-20 03:22:41.945056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.441 [2024-11-20 03:22:41.945070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:53.009 [2024-11-20 03:22:42.340885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.947 03:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:53.947 
00:15:53.947 real 0m8.470s 00:15:53.947 user 0m13.379s 00:15:53.947 sys 0m1.487s 00:15:53.947 03:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.947 ************************************ 00:15:53.947 END TEST raid5f_superblock_test 00:15:53.947 ************************************ 00:15:53.947 03:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.947 03:22:43 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:53.947 03:22:43 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:53.947 03:22:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:53.947 03:22:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.947 03:22:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.947 ************************************ 00:15:53.947 START TEST raid5f_rebuild_test 00:15:53.947 ************************************ 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:53.947 03:22:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84401 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84401 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84401 ']' 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.947 03:22:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.207 [2024-11-20 03:22:43.615389] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:15:54.207 [2024-11-20 03:22:43.615592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84401 ] 00:15:54.207 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:54.207 Zero copy mechanism will not be used. 00:15:54.207 [2024-11-20 03:22:43.789204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.467 [2024-11-20 03:22:43.904344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.726 [2024-11-20 03:22:44.104881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.726 [2024-11-20 03:22:44.105014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.986 BaseBdev1_malloc 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:15:54.986 [2024-11-20 03:22:44.489597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:54.986 [2024-11-20 03:22:44.489671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.986 [2024-11-20 03:22:44.489695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:54.986 [2024-11-20 03:22:44.489706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.986 [2024-11-20 03:22:44.491774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.986 [2024-11-20 03:22:44.491809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.986 BaseBdev1 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.986 BaseBdev2_malloc 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.986 [2024-11-20 03:22:44.544366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:54.986 [2024-11-20 03:22:44.544483] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.986 [2024-11-20 03:22:44.544519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:54.986 [2024-11-20 03:22:44.544549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.986 [2024-11-20 03:22:44.546727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.986 [2024-11-20 03:22:44.546800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.986 BaseBdev2 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.986 BaseBdev3_malloc 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.986 [2024-11-20 03:22:44.608957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:54.986 [2024-11-20 03:22:44.609012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.986 [2024-11-20 03:22:44.609032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:54.986 
[2024-11-20 03:22:44.609043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.986 [2024-11-20 03:22:44.611086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.986 [2024-11-20 03:22:44.611184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.986 BaseBdev3 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.986 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.246 BaseBdev4_malloc 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.246 [2024-11-20 03:22:44.664028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:55.246 [2024-11-20 03:22:44.664081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.246 [2024-11-20 03:22:44.664100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:55.246 [2024-11-20 03:22:44.664110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.246 [2024-11-20 03:22:44.666302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:55.246 [2024-11-20 03:22:44.666375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:55.246 BaseBdev4 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.246 spare_malloc 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.246 spare_delay 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.246 [2024-11-20 03:22:44.732959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.246 [2024-11-20 03:22:44.733074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.246 [2024-11-20 03:22:44.733112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:55.246 [2024-11-20 03:22:44.733145] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.246 [2024-11-20 03:22:44.735208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.246 [2024-11-20 03:22:44.735284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:55.246 spare 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.246 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.247 [2024-11-20 03:22:44.744976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.247 [2024-11-20 03:22:44.746815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.247 [2024-11-20 03:22:44.746914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.247 [2024-11-20 03:22:44.746986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.247 [2024-11-20 03:22:44.747108] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:55.247 [2024-11-20 03:22:44.747158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:55.247 [2024-11-20 03:22:44.747418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:55.247 [2024-11-20 03:22:44.754933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:55.247 [2024-11-20 03:22:44.754986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:55.247 [2024-11-20 
03:22:44.755227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.247 "name": "raid_bdev1", 00:15:55.247 "uuid": 
"2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:15:55.247 "strip_size_kb": 64, 00:15:55.247 "state": "online", 00:15:55.247 "raid_level": "raid5f", 00:15:55.247 "superblock": false, 00:15:55.247 "num_base_bdevs": 4, 00:15:55.247 "num_base_bdevs_discovered": 4, 00:15:55.247 "num_base_bdevs_operational": 4, 00:15:55.247 "base_bdevs_list": [ 00:15:55.247 { 00:15:55.247 "name": "BaseBdev1", 00:15:55.247 "uuid": "e4271478-5b1b-57f3-a198-a2c39c907950", 00:15:55.247 "is_configured": true, 00:15:55.247 "data_offset": 0, 00:15:55.247 "data_size": 65536 00:15:55.247 }, 00:15:55.247 { 00:15:55.247 "name": "BaseBdev2", 00:15:55.247 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:15:55.247 "is_configured": true, 00:15:55.247 "data_offset": 0, 00:15:55.247 "data_size": 65536 00:15:55.247 }, 00:15:55.247 { 00:15:55.247 "name": "BaseBdev3", 00:15:55.247 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:15:55.247 "is_configured": true, 00:15:55.247 "data_offset": 0, 00:15:55.247 "data_size": 65536 00:15:55.247 }, 00:15:55.247 { 00:15:55.247 "name": "BaseBdev4", 00:15:55.247 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:15:55.247 "is_configured": true, 00:15:55.247 "data_offset": 0, 00:15:55.247 "data_size": 65536 00:15:55.247 } 00:15:55.247 ] 00:15:55.247 }' 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.247 03:22:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.815 [2024-11-20 03:22:45.239338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.815 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:56.075 [2024-11-20 03:22:45.518710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:56.075 /dev/nbd0 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.075 1+0 records in 00:15:56.075 1+0 records out 00:15:56.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459825 s, 8.9 MB/s 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.075 03:22:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.075 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:56.076 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:56.076 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:56.076 03:22:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:56.645 512+0 records in 00:15:56.645 512+0 records out 00:15:56.645 100663296 bytes (101 MB, 96 MiB) copied, 0.485225 s, 207 MB/s 00:15:56.645 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:56.645 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.645 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:56.645 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.645 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:56.645 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.645 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:56.645 [2024-11-20 03:22:46.276435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.907 [2024-11-20 03:22:46.311903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.907 "name": "raid_bdev1", 00:15:56.907 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:15:56.907 "strip_size_kb": 64, 00:15:56.907 "state": "online", 00:15:56.907 "raid_level": "raid5f", 00:15:56.907 "superblock": false, 00:15:56.907 "num_base_bdevs": 4, 00:15:56.907 "num_base_bdevs_discovered": 3, 00:15:56.907 "num_base_bdevs_operational": 3, 00:15:56.907 "base_bdevs_list": [ 00:15:56.907 { 00:15:56.907 "name": null, 00:15:56.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.907 "is_configured": false, 00:15:56.907 "data_offset": 0, 00:15:56.907 "data_size": 65536 00:15:56.907 }, 00:15:56.907 { 00:15:56.907 "name": "BaseBdev2", 00:15:56.907 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:15:56.907 "is_configured": true, 00:15:56.907 
"data_offset": 0, 00:15:56.907 "data_size": 65536 00:15:56.907 }, 00:15:56.907 { 00:15:56.907 "name": "BaseBdev3", 00:15:56.907 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:15:56.907 "is_configured": true, 00:15:56.907 "data_offset": 0, 00:15:56.907 "data_size": 65536 00:15:56.907 }, 00:15:56.907 { 00:15:56.907 "name": "BaseBdev4", 00:15:56.907 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:15:56.907 "is_configured": true, 00:15:56.907 "data_offset": 0, 00:15:56.907 "data_size": 65536 00:15:56.907 } 00:15:56.907 ] 00:15:56.907 }' 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.907 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.172 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.172 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.172 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.172 [2024-11-20 03:22:46.791078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.432 [2024-11-20 03:22:46.808487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:57.432 03:22:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.432 03:22:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.432 [2024-11-20 03:22:46.818305] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.370 "name": "raid_bdev1", 00:15:58.370 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:15:58.370 "strip_size_kb": 64, 00:15:58.370 "state": "online", 00:15:58.370 "raid_level": "raid5f", 00:15:58.370 "superblock": false, 00:15:58.370 "num_base_bdevs": 4, 00:15:58.370 "num_base_bdevs_discovered": 4, 00:15:58.370 "num_base_bdevs_operational": 4, 00:15:58.370 "process": { 00:15:58.370 "type": "rebuild", 00:15:58.370 "target": "spare", 00:15:58.370 "progress": { 00:15:58.370 "blocks": 19200, 00:15:58.370 "percent": 9 00:15:58.370 } 00:15:58.370 }, 00:15:58.370 "base_bdevs_list": [ 00:15:58.370 { 00:15:58.370 "name": "spare", 00:15:58.370 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:15:58.370 "is_configured": true, 00:15:58.370 "data_offset": 0, 00:15:58.370 "data_size": 65536 00:15:58.370 }, 00:15:58.370 { 00:15:58.370 "name": "BaseBdev2", 00:15:58.370 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:15:58.370 "is_configured": true, 00:15:58.370 "data_offset": 0, 00:15:58.370 "data_size": 65536 00:15:58.370 }, 00:15:58.370 { 00:15:58.370 "name": "BaseBdev3", 00:15:58.370 "uuid": 
"0400e490-2d94-5d47-8163-39769de226e9", 00:15:58.370 "is_configured": true, 00:15:58.370 "data_offset": 0, 00:15:58.370 "data_size": 65536 00:15:58.370 }, 00:15:58.370 { 00:15:58.370 "name": "BaseBdev4", 00:15:58.370 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:15:58.370 "is_configured": true, 00:15:58.370 "data_offset": 0, 00:15:58.370 "data_size": 65536 00:15:58.370 } 00:15:58.370 ] 00:15:58.370 }' 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.370 03:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.370 [2024-11-20 03:22:47.973392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.631 [2024-11-20 03:22:48.025438] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.631 [2024-11-20 03:22:48.025506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.631 [2024-11-20 03:22:48.025523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.631 [2024-11-20 03:22:48.025532] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.631 "name": "raid_bdev1", 00:15:58.631 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:15:58.631 "strip_size_kb": 64, 00:15:58.631 "state": "online", 00:15:58.631 "raid_level": "raid5f", 00:15:58.631 "superblock": false, 00:15:58.631 "num_base_bdevs": 4, 00:15:58.631 "num_base_bdevs_discovered": 3, 00:15:58.631 
"num_base_bdevs_operational": 3, 00:15:58.631 "base_bdevs_list": [ 00:15:58.631 { 00:15:58.631 "name": null, 00:15:58.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.631 "is_configured": false, 00:15:58.631 "data_offset": 0, 00:15:58.631 "data_size": 65536 00:15:58.631 }, 00:15:58.631 { 00:15:58.631 "name": "BaseBdev2", 00:15:58.631 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:15:58.631 "is_configured": true, 00:15:58.631 "data_offset": 0, 00:15:58.631 "data_size": 65536 00:15:58.631 }, 00:15:58.631 { 00:15:58.631 "name": "BaseBdev3", 00:15:58.631 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:15:58.631 "is_configured": true, 00:15:58.631 "data_offset": 0, 00:15:58.631 "data_size": 65536 00:15:58.631 }, 00:15:58.631 { 00:15:58.631 "name": "BaseBdev4", 00:15:58.631 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:15:58.631 "is_configured": true, 00:15:58.631 "data_offset": 0, 00:15:58.631 "data_size": 65536 00:15:58.631 } 00:15:58.631 ] 00:15:58.631 }' 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.631 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.890 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.890 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.890 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.890 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.890 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.890 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.890 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.890 03:22:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.890 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.890 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.150 "name": "raid_bdev1", 00:15:59.150 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:15:59.150 "strip_size_kb": 64, 00:15:59.150 "state": "online", 00:15:59.150 "raid_level": "raid5f", 00:15:59.150 "superblock": false, 00:15:59.150 "num_base_bdevs": 4, 00:15:59.150 "num_base_bdevs_discovered": 3, 00:15:59.150 "num_base_bdevs_operational": 3, 00:15:59.150 "base_bdevs_list": [ 00:15:59.150 { 00:15:59.150 "name": null, 00:15:59.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.150 "is_configured": false, 00:15:59.150 "data_offset": 0, 00:15:59.150 "data_size": 65536 00:15:59.150 }, 00:15:59.150 { 00:15:59.150 "name": "BaseBdev2", 00:15:59.150 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:15:59.150 "is_configured": true, 00:15:59.150 "data_offset": 0, 00:15:59.150 "data_size": 65536 00:15:59.150 }, 00:15:59.150 { 00:15:59.150 "name": "BaseBdev3", 00:15:59.150 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:15:59.150 "is_configured": true, 00:15:59.150 "data_offset": 0, 00:15:59.150 "data_size": 65536 00:15:59.150 }, 00:15:59.150 { 00:15:59.150 "name": "BaseBdev4", 00:15:59.150 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:15:59.150 "is_configured": true, 00:15:59.150 "data_offset": 0, 00:15:59.150 "data_size": 65536 00:15:59.150 } 00:15:59.150 ] 00:15:59.150 }' 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.150 [2024-11-20 03:22:48.632162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.150 [2024-11-20 03:22:48.648236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.150 03:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:59.150 [2024-11-20 03:22:48.657545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.087 
03:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.087 "name": "raid_bdev1", 00:16:00.087 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:00.087 "strip_size_kb": 64, 00:16:00.087 "state": "online", 00:16:00.087 "raid_level": "raid5f", 00:16:00.087 "superblock": false, 00:16:00.087 "num_base_bdevs": 4, 00:16:00.087 "num_base_bdevs_discovered": 4, 00:16:00.087 "num_base_bdevs_operational": 4, 00:16:00.087 "process": { 00:16:00.087 "type": "rebuild", 00:16:00.087 "target": "spare", 00:16:00.087 "progress": { 00:16:00.087 "blocks": 19200, 00:16:00.087 "percent": 9 00:16:00.087 } 00:16:00.087 }, 00:16:00.087 "base_bdevs_list": [ 00:16:00.087 { 00:16:00.087 "name": "spare", 00:16:00.087 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:00.087 "is_configured": true, 00:16:00.087 "data_offset": 0, 00:16:00.087 "data_size": 65536 00:16:00.087 }, 00:16:00.087 { 00:16:00.087 "name": "BaseBdev2", 00:16:00.087 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:00.087 "is_configured": true, 00:16:00.087 "data_offset": 0, 00:16:00.087 "data_size": 65536 00:16:00.087 }, 00:16:00.087 { 00:16:00.087 "name": "BaseBdev3", 00:16:00.087 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:00.087 "is_configured": true, 00:16:00.087 "data_offset": 0, 00:16:00.087 "data_size": 65536 00:16:00.087 }, 00:16:00.087 { 00:16:00.087 "name": "BaseBdev4", 00:16:00.087 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:00.087 "is_configured": true, 00:16:00.087 "data_offset": 0, 00:16:00.087 "data_size": 65536 00:16:00.087 } 00:16:00.087 ] 00:16:00.087 }' 00:16:00.087 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=613 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:00.347 "name": "raid_bdev1", 00:16:00.347 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:00.347 "strip_size_kb": 64, 00:16:00.347 "state": "online", 00:16:00.347 "raid_level": "raid5f", 00:16:00.347 "superblock": false, 00:16:00.347 "num_base_bdevs": 4, 00:16:00.347 "num_base_bdevs_discovered": 4, 00:16:00.347 "num_base_bdevs_operational": 4, 00:16:00.347 "process": { 00:16:00.347 "type": "rebuild", 00:16:00.347 "target": "spare", 00:16:00.347 "progress": { 00:16:00.347 "blocks": 21120, 00:16:00.347 "percent": 10 00:16:00.347 } 00:16:00.347 }, 00:16:00.347 "base_bdevs_list": [ 00:16:00.347 { 00:16:00.347 "name": "spare", 00:16:00.347 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:00.347 "is_configured": true, 00:16:00.347 "data_offset": 0, 00:16:00.347 "data_size": 65536 00:16:00.347 }, 00:16:00.347 { 00:16:00.347 "name": "BaseBdev2", 00:16:00.347 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:00.347 "is_configured": true, 00:16:00.347 "data_offset": 0, 00:16:00.347 "data_size": 65536 00:16:00.347 }, 00:16:00.347 { 00:16:00.347 "name": "BaseBdev3", 00:16:00.347 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:00.347 "is_configured": true, 00:16:00.347 "data_offset": 0, 00:16:00.347 "data_size": 65536 00:16:00.347 }, 00:16:00.347 { 00:16:00.347 "name": "BaseBdev4", 00:16:00.347 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:00.347 "is_configured": true, 00:16:00.347 "data_offset": 0, 00:16:00.347 "data_size": 65536 00:16:00.347 } 00:16:00.347 ] 00:16:00.347 }' 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.347 03:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.347 03:22:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.287 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.287 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.287 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.287 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.287 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.287 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.546 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.546 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.546 03:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.546 03:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.546 03:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.546 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.546 "name": "raid_bdev1", 00:16:01.546 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:01.546 "strip_size_kb": 64, 00:16:01.546 "state": "online", 00:16:01.546 "raid_level": "raid5f", 00:16:01.546 "superblock": false, 00:16:01.546 "num_base_bdevs": 4, 00:16:01.546 "num_base_bdevs_discovered": 4, 00:16:01.546 "num_base_bdevs_operational": 4, 00:16:01.546 "process": { 00:16:01.546 "type": "rebuild", 00:16:01.546 "target": "spare", 00:16:01.546 "progress": { 00:16:01.546 "blocks": 42240, 00:16:01.546 "percent": 21 00:16:01.546 } 00:16:01.546 }, 00:16:01.546 "base_bdevs_list": [ 00:16:01.546 { 
00:16:01.546 "name": "spare", 00:16:01.546 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:01.546 "is_configured": true, 00:16:01.546 "data_offset": 0, 00:16:01.546 "data_size": 65536 00:16:01.546 }, 00:16:01.546 { 00:16:01.546 "name": "BaseBdev2", 00:16:01.546 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:01.546 "is_configured": true, 00:16:01.546 "data_offset": 0, 00:16:01.546 "data_size": 65536 00:16:01.546 }, 00:16:01.546 { 00:16:01.546 "name": "BaseBdev3", 00:16:01.546 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:01.546 "is_configured": true, 00:16:01.546 "data_offset": 0, 00:16:01.546 "data_size": 65536 00:16:01.546 }, 00:16:01.546 { 00:16:01.546 "name": "BaseBdev4", 00:16:01.546 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:01.547 "is_configured": true, 00:16:01.547 "data_offset": 0, 00:16:01.547 "data_size": 65536 00:16:01.547 } 00:16:01.547 ] 00:16:01.547 }' 00:16:01.547 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.547 03:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.547 03:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.547 03:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.547 03:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.484 "name": "raid_bdev1", 00:16:02.484 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:02.484 "strip_size_kb": 64, 00:16:02.484 "state": "online", 00:16:02.484 "raid_level": "raid5f", 00:16:02.484 "superblock": false, 00:16:02.484 "num_base_bdevs": 4, 00:16:02.484 "num_base_bdevs_discovered": 4, 00:16:02.484 "num_base_bdevs_operational": 4, 00:16:02.484 "process": { 00:16:02.484 "type": "rebuild", 00:16:02.484 "target": "spare", 00:16:02.484 "progress": { 00:16:02.484 "blocks": 63360, 00:16:02.484 "percent": 32 00:16:02.484 } 00:16:02.484 }, 00:16:02.484 "base_bdevs_list": [ 00:16:02.484 { 00:16:02.484 "name": "spare", 00:16:02.484 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:02.484 "is_configured": true, 00:16:02.484 "data_offset": 0, 00:16:02.484 "data_size": 65536 00:16:02.484 }, 00:16:02.484 { 00:16:02.484 "name": "BaseBdev2", 00:16:02.484 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:02.484 "is_configured": true, 00:16:02.484 "data_offset": 0, 00:16:02.484 "data_size": 65536 00:16:02.484 }, 00:16:02.484 { 00:16:02.484 "name": "BaseBdev3", 00:16:02.484 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:02.484 "is_configured": true, 00:16:02.484 "data_offset": 0, 00:16:02.484 
"data_size": 65536 00:16:02.484 }, 00:16:02.484 { 00:16:02.484 "name": "BaseBdev4", 00:16:02.484 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:02.484 "is_configured": true, 00:16:02.484 "data_offset": 0, 00:16:02.484 "data_size": 65536 00:16:02.484 } 00:16:02.484 ] 00:16:02.484 }' 00:16:02.484 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.743 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.743 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.743 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.743 03:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.683 "name": "raid_bdev1", 00:16:03.683 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:03.683 "strip_size_kb": 64, 00:16:03.683 "state": "online", 00:16:03.683 "raid_level": "raid5f", 00:16:03.683 "superblock": false, 00:16:03.683 "num_base_bdevs": 4, 00:16:03.683 "num_base_bdevs_discovered": 4, 00:16:03.683 "num_base_bdevs_operational": 4, 00:16:03.683 "process": { 00:16:03.683 "type": "rebuild", 00:16:03.683 "target": "spare", 00:16:03.683 "progress": { 00:16:03.683 "blocks": 86400, 00:16:03.683 "percent": 43 00:16:03.683 } 00:16:03.683 }, 00:16:03.683 "base_bdevs_list": [ 00:16:03.683 { 00:16:03.683 "name": "spare", 00:16:03.683 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:03.683 "is_configured": true, 00:16:03.683 "data_offset": 0, 00:16:03.683 "data_size": 65536 00:16:03.683 }, 00:16:03.683 { 00:16:03.683 "name": "BaseBdev2", 00:16:03.683 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:03.683 "is_configured": true, 00:16:03.683 "data_offset": 0, 00:16:03.683 "data_size": 65536 00:16:03.683 }, 00:16:03.683 { 00:16:03.683 "name": "BaseBdev3", 00:16:03.683 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:03.683 "is_configured": true, 00:16:03.683 "data_offset": 0, 00:16:03.683 "data_size": 65536 00:16:03.683 }, 00:16:03.683 { 00:16:03.683 "name": "BaseBdev4", 00:16:03.683 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:03.683 "is_configured": true, 00:16:03.683 "data_offset": 0, 00:16:03.683 "data_size": 65536 00:16:03.683 } 00:16:03.683 ] 00:16:03.683 }' 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.683 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:03.942 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.942 03:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.882 "name": "raid_bdev1", 00:16:04.882 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:04.882 "strip_size_kb": 64, 00:16:04.882 "state": "online", 00:16:04.882 "raid_level": "raid5f", 00:16:04.882 "superblock": false, 00:16:04.882 "num_base_bdevs": 4, 00:16:04.882 "num_base_bdevs_discovered": 4, 00:16:04.882 "num_base_bdevs_operational": 4, 00:16:04.882 "process": { 00:16:04.882 "type": "rebuild", 00:16:04.882 "target": "spare", 00:16:04.882 
"progress": { 00:16:04.882 "blocks": 107520, 00:16:04.882 "percent": 54 00:16:04.882 } 00:16:04.882 }, 00:16:04.882 "base_bdevs_list": [ 00:16:04.882 { 00:16:04.882 "name": "spare", 00:16:04.882 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:04.882 "is_configured": true, 00:16:04.882 "data_offset": 0, 00:16:04.882 "data_size": 65536 00:16:04.882 }, 00:16:04.882 { 00:16:04.882 "name": "BaseBdev2", 00:16:04.882 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:04.882 "is_configured": true, 00:16:04.882 "data_offset": 0, 00:16:04.882 "data_size": 65536 00:16:04.882 }, 00:16:04.882 { 00:16:04.882 "name": "BaseBdev3", 00:16:04.882 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:04.882 "is_configured": true, 00:16:04.882 "data_offset": 0, 00:16:04.882 "data_size": 65536 00:16:04.882 }, 00:16:04.882 { 00:16:04.882 "name": "BaseBdev4", 00:16:04.882 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:04.882 "is_configured": true, 00:16:04.882 "data_offset": 0, 00:16:04.882 "data_size": 65536 00:16:04.882 } 00:16:04.882 ] 00:16:04.882 }' 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.882 03:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.262 03:22:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.262 "name": "raid_bdev1", 00:16:06.262 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:06.262 "strip_size_kb": 64, 00:16:06.262 "state": "online", 00:16:06.262 "raid_level": "raid5f", 00:16:06.262 "superblock": false, 00:16:06.262 "num_base_bdevs": 4, 00:16:06.262 "num_base_bdevs_discovered": 4, 00:16:06.262 "num_base_bdevs_operational": 4, 00:16:06.262 "process": { 00:16:06.262 "type": "rebuild", 00:16:06.262 "target": "spare", 00:16:06.262 "progress": { 00:16:06.262 "blocks": 130560, 00:16:06.262 "percent": 66 00:16:06.262 } 00:16:06.262 }, 00:16:06.262 "base_bdevs_list": [ 00:16:06.262 { 00:16:06.262 "name": "spare", 00:16:06.262 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:06.262 "is_configured": true, 00:16:06.262 "data_offset": 0, 00:16:06.262 "data_size": 65536 00:16:06.262 }, 00:16:06.262 { 00:16:06.262 "name": "BaseBdev2", 00:16:06.262 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:06.262 "is_configured": true, 00:16:06.262 "data_offset": 0, 00:16:06.262 "data_size": 65536 00:16:06.262 }, 00:16:06.262 { 
00:16:06.262 "name": "BaseBdev3", 00:16:06.262 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:06.262 "is_configured": true, 00:16:06.262 "data_offset": 0, 00:16:06.262 "data_size": 65536 00:16:06.262 }, 00:16:06.262 { 00:16:06.262 "name": "BaseBdev4", 00:16:06.262 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:06.262 "is_configured": true, 00:16:06.262 "data_offset": 0, 00:16:06.262 "data_size": 65536 00:16:06.262 } 00:16:06.262 ] 00:16:06.262 }' 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.262 03:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.201 "name": "raid_bdev1", 00:16:07.201 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:07.201 "strip_size_kb": 64, 00:16:07.201 "state": "online", 00:16:07.201 "raid_level": "raid5f", 00:16:07.201 "superblock": false, 00:16:07.201 "num_base_bdevs": 4, 00:16:07.201 "num_base_bdevs_discovered": 4, 00:16:07.201 "num_base_bdevs_operational": 4, 00:16:07.201 "process": { 00:16:07.201 "type": "rebuild", 00:16:07.201 "target": "spare", 00:16:07.201 "progress": { 00:16:07.201 "blocks": 151680, 00:16:07.201 "percent": 77 00:16:07.201 } 00:16:07.201 }, 00:16:07.201 "base_bdevs_list": [ 00:16:07.201 { 00:16:07.201 "name": "spare", 00:16:07.201 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:07.201 "is_configured": true, 00:16:07.201 "data_offset": 0, 00:16:07.201 "data_size": 65536 00:16:07.201 }, 00:16:07.201 { 00:16:07.201 "name": "BaseBdev2", 00:16:07.201 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:07.201 "is_configured": true, 00:16:07.201 "data_offset": 0, 00:16:07.201 "data_size": 65536 00:16:07.201 }, 00:16:07.201 { 00:16:07.201 "name": "BaseBdev3", 00:16:07.201 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:07.201 "is_configured": true, 00:16:07.201 "data_offset": 0, 00:16:07.201 "data_size": 65536 00:16:07.201 }, 00:16:07.201 { 00:16:07.201 "name": "BaseBdev4", 00:16:07.201 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:07.201 "is_configured": true, 00:16:07.201 "data_offset": 0, 00:16:07.201 "data_size": 65536 00:16:07.201 } 00:16:07.201 ] 00:16:07.201 }' 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.201 03:22:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.201 03:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.139 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.140 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.140 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.140 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.140 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.140 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.140 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.140 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.140 03:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.140 03:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.399 03:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.399 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.399 "name": "raid_bdev1", 00:16:08.399 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:08.399 "strip_size_kb": 64, 00:16:08.399 "state": "online", 00:16:08.399 "raid_level": "raid5f", 00:16:08.399 "superblock": false, 00:16:08.399 "num_base_bdevs": 4, 00:16:08.399 
"num_base_bdevs_discovered": 4, 00:16:08.399 "num_base_bdevs_operational": 4, 00:16:08.399 "process": { 00:16:08.399 "type": "rebuild", 00:16:08.399 "target": "spare", 00:16:08.399 "progress": { 00:16:08.399 "blocks": 172800, 00:16:08.399 "percent": 87 00:16:08.399 } 00:16:08.399 }, 00:16:08.399 "base_bdevs_list": [ 00:16:08.399 { 00:16:08.399 "name": "spare", 00:16:08.399 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:08.399 "is_configured": true, 00:16:08.399 "data_offset": 0, 00:16:08.399 "data_size": 65536 00:16:08.399 }, 00:16:08.399 { 00:16:08.399 "name": "BaseBdev2", 00:16:08.399 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:08.399 "is_configured": true, 00:16:08.399 "data_offset": 0, 00:16:08.399 "data_size": 65536 00:16:08.399 }, 00:16:08.399 { 00:16:08.399 "name": "BaseBdev3", 00:16:08.399 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:08.399 "is_configured": true, 00:16:08.399 "data_offset": 0, 00:16:08.399 "data_size": 65536 00:16:08.399 }, 00:16:08.399 { 00:16:08.399 "name": "BaseBdev4", 00:16:08.399 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:08.399 "is_configured": true, 00:16:08.399 "data_offset": 0, 00:16:08.399 "data_size": 65536 00:16:08.400 } 00:16:08.400 ] 00:16:08.400 }' 00:16:08.400 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.400 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.400 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.400 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.400 03:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.339 "name": "raid_bdev1", 00:16:09.339 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:09.339 "strip_size_kb": 64, 00:16:09.339 "state": "online", 00:16:09.339 "raid_level": "raid5f", 00:16:09.339 "superblock": false, 00:16:09.339 "num_base_bdevs": 4, 00:16:09.339 "num_base_bdevs_discovered": 4, 00:16:09.339 "num_base_bdevs_operational": 4, 00:16:09.339 "process": { 00:16:09.339 "type": "rebuild", 00:16:09.339 "target": "spare", 00:16:09.339 "progress": { 00:16:09.339 "blocks": 195840, 00:16:09.339 "percent": 99 00:16:09.339 } 00:16:09.339 }, 00:16:09.339 "base_bdevs_list": [ 00:16:09.339 { 00:16:09.339 "name": "spare", 00:16:09.339 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:09.339 "is_configured": true, 00:16:09.339 "data_offset": 0, 00:16:09.339 "data_size": 65536 00:16:09.339 }, 00:16:09.339 { 00:16:09.339 "name": "BaseBdev2", 00:16:09.339 "uuid": 
"aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:09.339 "is_configured": true, 00:16:09.339 "data_offset": 0, 00:16:09.339 "data_size": 65536 00:16:09.339 }, 00:16:09.339 { 00:16:09.339 "name": "BaseBdev3", 00:16:09.339 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:09.339 "is_configured": true, 00:16:09.339 "data_offset": 0, 00:16:09.339 "data_size": 65536 00:16:09.339 }, 00:16:09.339 { 00:16:09.339 "name": "BaseBdev4", 00:16:09.339 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:09.339 "is_configured": true, 00:16:09.339 "data_offset": 0, 00:16:09.339 "data_size": 65536 00:16:09.339 } 00:16:09.339 ] 00:16:09.339 }' 00:16:09.339 03:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.599 [2024-11-20 03:22:59.012665] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:09.599 [2024-11-20 03:22:59.012737] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:09.599 [2024-11-20 03:22:59.012779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.599 03:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.599 03:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.599 03:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.599 03:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.536 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.536 "name": "raid_bdev1", 00:16:10.536 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:10.536 "strip_size_kb": 64, 00:16:10.536 "state": "online", 00:16:10.536 "raid_level": "raid5f", 00:16:10.536 "superblock": false, 00:16:10.536 "num_base_bdevs": 4, 00:16:10.537 "num_base_bdevs_discovered": 4, 00:16:10.537 "num_base_bdevs_operational": 4, 00:16:10.537 "base_bdevs_list": [ 00:16:10.537 { 00:16:10.537 "name": "spare", 00:16:10.537 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:10.537 "is_configured": true, 00:16:10.537 "data_offset": 0, 00:16:10.537 "data_size": 65536 00:16:10.537 }, 00:16:10.537 { 00:16:10.537 "name": "BaseBdev2", 00:16:10.537 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:10.537 "is_configured": true, 00:16:10.537 "data_offset": 0, 00:16:10.537 "data_size": 65536 00:16:10.537 }, 00:16:10.537 { 00:16:10.537 "name": "BaseBdev3", 00:16:10.537 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:10.537 "is_configured": true, 00:16:10.537 "data_offset": 0, 00:16:10.537 "data_size": 65536 00:16:10.537 }, 00:16:10.537 { 00:16:10.537 "name": "BaseBdev4", 00:16:10.537 
"uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:10.537 "is_configured": true, 00:16:10.537 "data_offset": 0, 00:16:10.537 "data_size": 65536 00:16:10.537 } 00:16:10.537 ] 00:16:10.537 }' 00:16:10.537 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.796 "name": "raid_bdev1", 00:16:10.796 "uuid": 
"2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:10.796 "strip_size_kb": 64, 00:16:10.796 "state": "online", 00:16:10.796 "raid_level": "raid5f", 00:16:10.796 "superblock": false, 00:16:10.796 "num_base_bdevs": 4, 00:16:10.796 "num_base_bdevs_discovered": 4, 00:16:10.796 "num_base_bdevs_operational": 4, 00:16:10.796 "base_bdevs_list": [ 00:16:10.796 { 00:16:10.796 "name": "spare", 00:16:10.796 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:10.796 "is_configured": true, 00:16:10.796 "data_offset": 0, 00:16:10.796 "data_size": 65536 00:16:10.796 }, 00:16:10.796 { 00:16:10.796 "name": "BaseBdev2", 00:16:10.796 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:10.796 "is_configured": true, 00:16:10.796 "data_offset": 0, 00:16:10.796 "data_size": 65536 00:16:10.796 }, 00:16:10.796 { 00:16:10.796 "name": "BaseBdev3", 00:16:10.796 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:10.796 "is_configured": true, 00:16:10.796 "data_offset": 0, 00:16:10.796 "data_size": 65536 00:16:10.796 }, 00:16:10.796 { 00:16:10.796 "name": "BaseBdev4", 00:16:10.796 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:10.796 "is_configured": true, 00:16:10.796 "data_offset": 0, 00:16:10.796 "data_size": 65536 00:16:10.796 } 00:16:10.796 ] 00:16:10.796 }' 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.796 03:23:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.796 "name": "raid_bdev1", 00:16:10.796 "uuid": "2f59d47a-81e0-40f0-b24b-bc2a4620cee4", 00:16:10.796 "strip_size_kb": 64, 00:16:10.796 "state": "online", 00:16:10.796 "raid_level": "raid5f", 00:16:10.796 "superblock": false, 00:16:10.796 "num_base_bdevs": 4, 00:16:10.796 "num_base_bdevs_discovered": 4, 00:16:10.796 "num_base_bdevs_operational": 4, 00:16:10.796 "base_bdevs_list": [ 00:16:10.796 { 00:16:10.796 "name": "spare", 00:16:10.796 "uuid": "9570104e-f4c8-513b-bb3b-3d394c5f4c98", 00:16:10.796 "is_configured": 
true, 00:16:10.796 "data_offset": 0, 00:16:10.796 "data_size": 65536 00:16:10.796 }, 00:16:10.796 { 00:16:10.796 "name": "BaseBdev2", 00:16:10.796 "uuid": "aeee9885-0ecc-58df-acee-6934f894d9c4", 00:16:10.796 "is_configured": true, 00:16:10.796 "data_offset": 0, 00:16:10.796 "data_size": 65536 00:16:10.796 }, 00:16:10.796 { 00:16:10.796 "name": "BaseBdev3", 00:16:10.796 "uuid": "0400e490-2d94-5d47-8163-39769de226e9", 00:16:10.796 "is_configured": true, 00:16:10.796 "data_offset": 0, 00:16:10.796 "data_size": 65536 00:16:10.796 }, 00:16:10.796 { 00:16:10.796 "name": "BaseBdev4", 00:16:10.796 "uuid": "eac276cd-5ec4-5a42-828e-1db9a438bf7f", 00:16:10.796 "is_configured": true, 00:16:10.796 "data_offset": 0, 00:16:10.796 "data_size": 65536 00:16:10.796 } 00:16:10.796 ] 00:16:10.796 }' 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.796 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.364 [2024-11-20 03:23:00.786170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.364 [2024-11-20 03:23:00.786208] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.364 [2024-11-20 03:23:00.786320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.364 [2024-11-20 03:23:00.786442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.364 [2024-11-20 03:23:00.786455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:11.364 03:23:00 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.364 03:23:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:11.623 /dev/nbd0 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.623 1+0 records in 00:16:11.623 1+0 records out 00:16:11.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383731 s, 10.7 MB/s 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.623 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:11.882 /dev/nbd1 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.882 1+0 records in 00:16:11.882 1+0 records out 00:16:11.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287005 s, 14.3 MB/s 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:11.882 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.883 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.883 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.142 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:12.401 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84401 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84401 ']' 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84401 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:16:12.402 03:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84401 00:16:12.402 03:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.402 03:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.402 killing process with pid 84401 00:16:12.402 03:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84401' 00:16:12.402 03:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84401 00:16:12.402 Received shutdown signal, test time was about 60.000000 seconds 00:16:12.402 00:16:12.402 Latency(us) 00:16:12.402 [2024-11-20T03:23:02.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.402 [2024-11-20T03:23:02.037Z] =================================================================================================================== 00:16:12.402 [2024-11-20T03:23:02.037Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:12.402 [2024-11-20 03:23:02.009238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.402 03:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84401 00:16:12.971 [2024-11-20 03:23:02.495742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:14.350 00:16:14.350 real 0m20.059s 00:16:14.350 user 0m23.982s 00:16:14.350 sys 0m2.257s 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.350 ************************************ 00:16:14.350 END TEST raid5f_rebuild_test 00:16:14.350 ************************************ 00:16:14.350 03:23:03 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:14.350 03:23:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:14.350 03:23:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.350 03:23:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.350 ************************************ 00:16:14.350 START TEST raid5f_rebuild_test_sb 00:16:14.350 ************************************ 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:14.350 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84921 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84921 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84921 ']' 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.351 03:23:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.351 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:14.351 Zero copy mechanism will not be used. 00:16:14.351 [2024-11-20 03:23:03.744634] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:16:14.351 [2024-11-20 03:23:03.744749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84921 ] 00:16:14.351 [2024-11-20 03:23:03.916875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.610 [2024-11-20 03:23:04.028060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.610 [2024-11-20 03:23:04.229346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.610 [2024-11-20 03:23:04.229380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.180 BaseBdev1_malloc 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.180 [2024-11-20 03:23:04.642484] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:15.180 [2024-11-20 03:23:04.642551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.180 [2024-11-20 03:23:04.642591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:15.180 [2024-11-20 03:23:04.642602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.180 [2024-11-20 03:23:04.644715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.180 [2024-11-20 03:23:04.644752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.180 BaseBdev1 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.180 BaseBdev2_malloc 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.180 [2024-11-20 03:23:04.700489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:15.180 [2024-11-20 03:23:04.700548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:15.180 [2024-11-20 03:23:04.700565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:15.180 [2024-11-20 03:23:04.700577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.180 [2024-11-20 03:23:04.702666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.180 [2024-11-20 03:23:04.702698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:15.180 BaseBdev2 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.180 BaseBdev3_malloc 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.180 [2024-11-20 03:23:04.765837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:15.180 [2024-11-20 03:23:04.765890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.180 [2024-11-20 03:23:04.765909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:15.180 [2024-11-20 
03:23:04.765919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.180 [2024-11-20 03:23:04.767960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.180 [2024-11-20 03:23:04.767997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:15.180 BaseBdev3 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.180 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.439 BaseBdev4_malloc 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.439 [2024-11-20 03:23:04.822002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:15.439 [2024-11-20 03:23:04.822060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.439 [2024-11-20 03:23:04.822079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:15.439 [2024-11-20 03:23:04.822089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.439 [2024-11-20 03:23:04.824319] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:15.439 [2024-11-20 03:23:04.824358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:15.439 BaseBdev4 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.439 spare_malloc 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.439 spare_delay 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.439 [2024-11-20 03:23:04.885734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.439 [2024-11-20 03:23:04.885792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.439 [2024-11-20 03:23:04.885812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:15.439 [2024-11-20 03:23:04.885823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.439 [2024-11-20 03:23:04.887874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.439 [2024-11-20 03:23:04.887910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.439 spare 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.439 [2024-11-20 03:23:04.897769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.439 [2024-11-20 03:23:04.899556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.439 [2024-11-20 03:23:04.899632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.439 [2024-11-20 03:23:04.899683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:15.439 [2024-11-20 03:23:04.899906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:15.439 [2024-11-20 03:23:04.899930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:15.439 [2024-11-20 03:23:04.900166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:15.439 [2024-11-20 03:23:04.907578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:15.439 [2024-11-20 03:23:04.907600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:15.439 [2024-11-20 03:23:04.907791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.439 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.440 03:23:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.440 "name": "raid_bdev1", 00:16:15.440 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:15.440 "strip_size_kb": 64, 00:16:15.440 "state": "online", 00:16:15.440 "raid_level": "raid5f", 00:16:15.440 "superblock": true, 00:16:15.440 "num_base_bdevs": 4, 00:16:15.440 "num_base_bdevs_discovered": 4, 00:16:15.440 "num_base_bdevs_operational": 4, 00:16:15.440 "base_bdevs_list": [ 00:16:15.440 { 00:16:15.440 "name": "BaseBdev1", 00:16:15.440 "uuid": "32c529dd-8e3b-51ab-bf16-97001f8aa10e", 00:16:15.440 "is_configured": true, 00:16:15.440 "data_offset": 2048, 00:16:15.440 "data_size": 63488 00:16:15.440 }, 00:16:15.440 { 00:16:15.440 "name": "BaseBdev2", 00:16:15.440 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:15.440 "is_configured": true, 00:16:15.440 "data_offset": 2048, 00:16:15.440 "data_size": 63488 00:16:15.440 }, 00:16:15.440 { 00:16:15.440 "name": "BaseBdev3", 00:16:15.440 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:15.440 "is_configured": true, 00:16:15.440 "data_offset": 2048, 00:16:15.440 "data_size": 63488 00:16:15.440 }, 00:16:15.440 { 00:16:15.440 "name": "BaseBdev4", 00:16:15.440 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:15.440 "is_configured": true, 00:16:15.440 "data_offset": 2048, 00:16:15.440 "data_size": 63488 00:16:15.440 } 00:16:15.440 ] 00:16:15.440 }' 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.440 03:23:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.704 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:15.704 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:15.704 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.704 03:23:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.705 [2024-11-20 03:23:05.315733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:15.971 03:23:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:15.971 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:15.971 [2024-11-20 03:23:05.595012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:16.231 /dev/nbd0 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.231 1+0 records in 00:16:16.231 
1+0 records out 00:16:16.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351001 s, 11.7 MB/s 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:16.231 03:23:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:16.490 496+0 records in 00:16:16.490 496+0 records out 00:16:16.490 97517568 bytes (98 MB, 93 MiB) copied, 0.458251 s, 213 MB/s 00:16:16.490 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.750 03:23:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.750 [2024-11-20 03:23:06.336454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.750 [2024-11-20 03:23:06.351705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.750 03:23:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.750 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.011 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.011 "name": "raid_bdev1", 00:16:17.011 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:17.011 "strip_size_kb": 64, 00:16:17.011 "state": "online", 00:16:17.011 "raid_level": "raid5f", 00:16:17.011 "superblock": true, 00:16:17.011 "num_base_bdevs": 4, 00:16:17.011 "num_base_bdevs_discovered": 3, 00:16:17.011 "num_base_bdevs_operational": 3, 00:16:17.011 
"base_bdevs_list": [ 00:16:17.011 { 00:16:17.011 "name": null, 00:16:17.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.011 "is_configured": false, 00:16:17.011 "data_offset": 0, 00:16:17.011 "data_size": 63488 00:16:17.011 }, 00:16:17.011 { 00:16:17.011 "name": "BaseBdev2", 00:16:17.011 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:17.011 "is_configured": true, 00:16:17.011 "data_offset": 2048, 00:16:17.011 "data_size": 63488 00:16:17.011 }, 00:16:17.011 { 00:16:17.011 "name": "BaseBdev3", 00:16:17.011 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:17.011 "is_configured": true, 00:16:17.011 "data_offset": 2048, 00:16:17.011 "data_size": 63488 00:16:17.011 }, 00:16:17.011 { 00:16:17.011 "name": "BaseBdev4", 00:16:17.011 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:17.011 "is_configured": true, 00:16:17.011 "data_offset": 2048, 00:16:17.011 "data_size": 63488 00:16:17.011 } 00:16:17.011 ] 00:16:17.011 }' 00:16:17.011 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.011 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.271 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:17.271 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.271 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.271 [2024-11-20 03:23:06.791011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.271 [2024-11-20 03:23:06.806897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:17.271 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.271 03:23:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:17.271 [2024-11-20 03:23:06.816886] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.210 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.469 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.469 "name": "raid_bdev1", 00:16:18.469 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:18.469 "strip_size_kb": 64, 00:16:18.469 "state": "online", 00:16:18.469 "raid_level": "raid5f", 00:16:18.469 "superblock": true, 00:16:18.469 "num_base_bdevs": 4, 00:16:18.469 "num_base_bdevs_discovered": 4, 00:16:18.469 "num_base_bdevs_operational": 4, 00:16:18.469 "process": { 00:16:18.469 "type": "rebuild", 00:16:18.469 "target": "spare", 00:16:18.469 "progress": { 00:16:18.469 "blocks": 17280, 00:16:18.469 "percent": 9 00:16:18.469 } 00:16:18.469 }, 00:16:18.469 "base_bdevs_list": [ 00:16:18.469 { 00:16:18.469 "name": "spare", 00:16:18.469 "uuid": 
"35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:18.470 "is_configured": true, 00:16:18.470 "data_offset": 2048, 00:16:18.470 "data_size": 63488 00:16:18.470 }, 00:16:18.470 { 00:16:18.470 "name": "BaseBdev2", 00:16:18.470 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:18.470 "is_configured": true, 00:16:18.470 "data_offset": 2048, 00:16:18.470 "data_size": 63488 00:16:18.470 }, 00:16:18.470 { 00:16:18.470 "name": "BaseBdev3", 00:16:18.470 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:18.470 "is_configured": true, 00:16:18.470 "data_offset": 2048, 00:16:18.470 "data_size": 63488 00:16:18.470 }, 00:16:18.470 { 00:16:18.470 "name": "BaseBdev4", 00:16:18.470 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:18.470 "is_configured": true, 00:16:18.470 "data_offset": 2048, 00:16:18.470 "data_size": 63488 00:16:18.470 } 00:16:18.470 ] 00:16:18.470 }' 00:16:18.470 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.470 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.470 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.470 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.470 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:18.470 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.470 03:23:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.470 [2024-11-20 03:23:07.947546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.470 [2024-11-20 03:23:08.025502] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:18.470 [2024-11-20 03:23:08.025592] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.470 [2024-11-20 03:23:08.025622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.470 [2024-11-20 03:23:08.025633] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:18.470 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.729 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.729 "name": "raid_bdev1", 00:16:18.729 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:18.729 "strip_size_kb": 64, 00:16:18.729 "state": "online", 00:16:18.729 "raid_level": "raid5f", 00:16:18.729 "superblock": true, 00:16:18.729 "num_base_bdevs": 4, 00:16:18.729 "num_base_bdevs_discovered": 3, 00:16:18.729 "num_base_bdevs_operational": 3, 00:16:18.729 "base_bdevs_list": [ 00:16:18.729 { 00:16:18.729 "name": null, 00:16:18.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.729 "is_configured": false, 00:16:18.729 "data_offset": 0, 00:16:18.729 "data_size": 63488 00:16:18.729 }, 00:16:18.729 { 00:16:18.729 "name": "BaseBdev2", 00:16:18.729 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:18.729 "is_configured": true, 00:16:18.729 "data_offset": 2048, 00:16:18.729 "data_size": 63488 00:16:18.729 }, 00:16:18.729 { 00:16:18.729 "name": "BaseBdev3", 00:16:18.729 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:18.729 "is_configured": true, 00:16:18.729 "data_offset": 2048, 00:16:18.729 "data_size": 63488 00:16:18.729 }, 00:16:18.729 { 00:16:18.729 "name": "BaseBdev4", 00:16:18.729 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:18.729 "is_configured": true, 00:16:18.729 "data_offset": 2048, 00:16:18.729 "data_size": 63488 00:16:18.729 } 00:16:18.729 ] 00:16:18.729 }' 00:16:18.729 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.729 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.988 
03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.988 "name": "raid_bdev1", 00:16:18.988 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:18.988 "strip_size_kb": 64, 00:16:18.988 "state": "online", 00:16:18.988 "raid_level": "raid5f", 00:16:18.988 "superblock": true, 00:16:18.988 "num_base_bdevs": 4, 00:16:18.988 "num_base_bdevs_discovered": 3, 00:16:18.988 "num_base_bdevs_operational": 3, 00:16:18.988 "base_bdevs_list": [ 00:16:18.988 { 00:16:18.988 "name": null, 00:16:18.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.988 "is_configured": false, 00:16:18.988 "data_offset": 0, 00:16:18.988 "data_size": 63488 00:16:18.988 }, 00:16:18.988 { 00:16:18.988 "name": "BaseBdev2", 00:16:18.988 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:18.988 "is_configured": true, 00:16:18.988 "data_offset": 2048, 00:16:18.988 "data_size": 63488 00:16:18.988 }, 00:16:18.988 { 00:16:18.988 "name": "BaseBdev3", 00:16:18.988 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:18.988 "is_configured": true, 00:16:18.988 "data_offset": 2048, 00:16:18.988 
"data_size": 63488 00:16:18.988 }, 00:16:18.988 { 00:16:18.988 "name": "BaseBdev4", 00:16:18.988 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:18.988 "is_configured": true, 00:16:18.988 "data_offset": 2048, 00:16:18.988 "data_size": 63488 00:16:18.988 } 00:16:18.988 ] 00:16:18.988 }' 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.988 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.248 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:19.248 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.248 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.248 [2024-11-20 03:23:08.627603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.248 [2024-11-20 03:23:08.644425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:19.248 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.248 03:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:19.248 [2024-11-20 03:23:08.655240] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.185 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.185 "name": "raid_bdev1", 00:16:20.185 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:20.185 "strip_size_kb": 64, 00:16:20.185 "state": "online", 00:16:20.185 "raid_level": "raid5f", 00:16:20.185 "superblock": true, 00:16:20.185 "num_base_bdevs": 4, 00:16:20.185 "num_base_bdevs_discovered": 4, 00:16:20.185 "num_base_bdevs_operational": 4, 00:16:20.185 "process": { 00:16:20.185 "type": "rebuild", 00:16:20.185 "target": "spare", 00:16:20.185 "progress": { 00:16:20.185 "blocks": 19200, 00:16:20.185 "percent": 10 00:16:20.185 } 00:16:20.185 }, 00:16:20.185 "base_bdevs_list": [ 00:16:20.185 { 00:16:20.185 "name": "spare", 00:16:20.185 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:20.185 "is_configured": true, 00:16:20.185 "data_offset": 2048, 00:16:20.185 "data_size": 63488 00:16:20.185 }, 00:16:20.185 { 00:16:20.185 "name": "BaseBdev2", 00:16:20.185 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:20.185 "is_configured": true, 00:16:20.185 "data_offset": 2048, 00:16:20.185 "data_size": 63488 00:16:20.185 }, 00:16:20.185 { 
00:16:20.185 "name": "BaseBdev3", 00:16:20.185 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:20.185 "is_configured": true, 00:16:20.185 "data_offset": 2048, 00:16:20.185 "data_size": 63488 00:16:20.185 }, 00:16:20.185 { 00:16:20.185 "name": "BaseBdev4", 00:16:20.185 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:20.185 "is_configured": true, 00:16:20.185 "data_offset": 2048, 00:16:20.186 "data_size": 63488 00:16:20.186 } 00:16:20.186 ] 00:16:20.186 }' 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:20.186 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=633 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.186 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.446 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.446 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.446 "name": "raid_bdev1", 00:16:20.446 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:20.446 "strip_size_kb": 64, 00:16:20.446 "state": "online", 00:16:20.446 "raid_level": "raid5f", 00:16:20.446 "superblock": true, 00:16:20.446 "num_base_bdevs": 4, 00:16:20.446 "num_base_bdevs_discovered": 4, 00:16:20.446 "num_base_bdevs_operational": 4, 00:16:20.446 "process": { 00:16:20.446 "type": "rebuild", 00:16:20.446 "target": "spare", 00:16:20.446 "progress": { 00:16:20.446 "blocks": 21120, 00:16:20.446 "percent": 11 00:16:20.446 } 00:16:20.446 }, 00:16:20.446 "base_bdevs_list": [ 00:16:20.446 { 00:16:20.446 "name": "spare", 00:16:20.446 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:20.446 "is_configured": true, 00:16:20.446 "data_offset": 2048, 00:16:20.446 "data_size": 63488 00:16:20.446 }, 00:16:20.446 { 00:16:20.446 "name": "BaseBdev2", 00:16:20.446 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:20.446 "is_configured": true, 00:16:20.446 "data_offset": 2048, 00:16:20.446 "data_size": 63488 00:16:20.446 }, 00:16:20.446 { 
00:16:20.446 "name": "BaseBdev3", 00:16:20.446 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:20.446 "is_configured": true, 00:16:20.446 "data_offset": 2048, 00:16:20.446 "data_size": 63488 00:16:20.446 }, 00:16:20.446 { 00:16:20.446 "name": "BaseBdev4", 00:16:20.446 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:20.446 "is_configured": true, 00:16:20.446 "data_offset": 2048, 00:16:20.446 "data_size": 63488 00:16:20.446 } 00:16:20.446 ] 00:16:20.446 }' 00:16:20.446 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.446 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.446 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.446 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.446 03:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.387 03:23:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.387 03:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.387 03:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.387 "name": "raid_bdev1", 00:16:21.387 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:21.387 "strip_size_kb": 64, 00:16:21.387 "state": "online", 00:16:21.387 "raid_level": "raid5f", 00:16:21.387 "superblock": true, 00:16:21.387 "num_base_bdevs": 4, 00:16:21.387 "num_base_bdevs_discovered": 4, 00:16:21.387 "num_base_bdevs_operational": 4, 00:16:21.387 "process": { 00:16:21.387 "type": "rebuild", 00:16:21.387 "target": "spare", 00:16:21.387 "progress": { 00:16:21.387 "blocks": 44160, 00:16:21.387 "percent": 23 00:16:21.387 } 00:16:21.387 }, 00:16:21.387 "base_bdevs_list": [ 00:16:21.387 { 00:16:21.387 "name": "spare", 00:16:21.387 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:21.387 "is_configured": true, 00:16:21.387 "data_offset": 2048, 00:16:21.387 "data_size": 63488 00:16:21.387 }, 00:16:21.387 { 00:16:21.387 "name": "BaseBdev2", 00:16:21.387 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:21.387 "is_configured": true, 00:16:21.387 "data_offset": 2048, 00:16:21.387 "data_size": 63488 00:16:21.387 }, 00:16:21.387 { 00:16:21.387 "name": "BaseBdev3", 00:16:21.387 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:21.387 "is_configured": true, 00:16:21.387 "data_offset": 2048, 00:16:21.387 "data_size": 63488 00:16:21.387 }, 00:16:21.387 { 00:16:21.387 "name": "BaseBdev4", 00:16:21.387 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:21.387 "is_configured": true, 00:16:21.387 "data_offset": 2048, 00:16:21.387 "data_size": 63488 00:16:21.387 } 00:16:21.387 ] 00:16:21.387 }' 00:16:21.387 03:23:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.647 03:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.647 03:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.647 03:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.647 03:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.587 "name": "raid_bdev1", 00:16:22.587 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:22.587 "strip_size_kb": 64, 00:16:22.587 "state": 
"online", 00:16:22.587 "raid_level": "raid5f", 00:16:22.587 "superblock": true, 00:16:22.587 "num_base_bdevs": 4, 00:16:22.587 "num_base_bdevs_discovered": 4, 00:16:22.587 "num_base_bdevs_operational": 4, 00:16:22.587 "process": { 00:16:22.587 "type": "rebuild", 00:16:22.587 "target": "spare", 00:16:22.587 "progress": { 00:16:22.587 "blocks": 65280, 00:16:22.587 "percent": 34 00:16:22.587 } 00:16:22.587 }, 00:16:22.587 "base_bdevs_list": [ 00:16:22.587 { 00:16:22.587 "name": "spare", 00:16:22.587 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:22.587 "is_configured": true, 00:16:22.587 "data_offset": 2048, 00:16:22.587 "data_size": 63488 00:16:22.587 }, 00:16:22.587 { 00:16:22.587 "name": "BaseBdev2", 00:16:22.587 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:22.587 "is_configured": true, 00:16:22.587 "data_offset": 2048, 00:16:22.587 "data_size": 63488 00:16:22.587 }, 00:16:22.587 { 00:16:22.587 "name": "BaseBdev3", 00:16:22.587 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:22.587 "is_configured": true, 00:16:22.587 "data_offset": 2048, 00:16:22.587 "data_size": 63488 00:16:22.587 }, 00:16:22.587 { 00:16:22.587 "name": "BaseBdev4", 00:16:22.587 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:22.587 "is_configured": true, 00:16:22.587 "data_offset": 2048, 00:16:22.587 "data_size": 63488 00:16:22.587 } 00:16:22.587 ] 00:16:22.587 }' 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.587 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.847 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.847 03:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.787 "name": "raid_bdev1", 00:16:23.787 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:23.787 "strip_size_kb": 64, 00:16:23.787 "state": "online", 00:16:23.787 "raid_level": "raid5f", 00:16:23.787 "superblock": true, 00:16:23.787 "num_base_bdevs": 4, 00:16:23.787 "num_base_bdevs_discovered": 4, 00:16:23.787 "num_base_bdevs_operational": 4, 00:16:23.787 "process": { 00:16:23.787 "type": "rebuild", 00:16:23.787 "target": "spare", 00:16:23.787 "progress": { 00:16:23.787 "blocks": 86400, 00:16:23.787 "percent": 45 00:16:23.787 } 00:16:23.787 }, 00:16:23.787 "base_bdevs_list": [ 00:16:23.787 { 00:16:23.787 "name": "spare", 00:16:23.787 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 
00:16:23.787 "is_configured": true, 00:16:23.787 "data_offset": 2048, 00:16:23.787 "data_size": 63488 00:16:23.787 }, 00:16:23.787 { 00:16:23.787 "name": "BaseBdev2", 00:16:23.787 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:23.787 "is_configured": true, 00:16:23.787 "data_offset": 2048, 00:16:23.787 "data_size": 63488 00:16:23.787 }, 00:16:23.787 { 00:16:23.787 "name": "BaseBdev3", 00:16:23.787 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:23.787 "is_configured": true, 00:16:23.787 "data_offset": 2048, 00:16:23.787 "data_size": 63488 00:16:23.787 }, 00:16:23.787 { 00:16:23.787 "name": "BaseBdev4", 00:16:23.787 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:23.787 "is_configured": true, 00:16:23.787 "data_offset": 2048, 00:16:23.787 "data_size": 63488 00:16:23.787 } 00:16:23.787 ] 00:16:23.787 }' 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.787 03:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.168 03:23:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.168 "name": "raid_bdev1", 00:16:25.168 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:25.168 "strip_size_kb": 64, 00:16:25.168 "state": "online", 00:16:25.168 "raid_level": "raid5f", 00:16:25.168 "superblock": true, 00:16:25.168 "num_base_bdevs": 4, 00:16:25.168 "num_base_bdevs_discovered": 4, 00:16:25.168 "num_base_bdevs_operational": 4, 00:16:25.168 "process": { 00:16:25.168 "type": "rebuild", 00:16:25.168 "target": "spare", 00:16:25.168 "progress": { 00:16:25.168 "blocks": 109440, 00:16:25.168 "percent": 57 00:16:25.168 } 00:16:25.168 }, 00:16:25.168 "base_bdevs_list": [ 00:16:25.168 { 00:16:25.168 "name": "spare", 00:16:25.168 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:25.168 "is_configured": true, 00:16:25.168 "data_offset": 2048, 00:16:25.168 "data_size": 63488 00:16:25.168 }, 00:16:25.168 { 00:16:25.168 "name": "BaseBdev2", 00:16:25.168 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:25.168 "is_configured": true, 00:16:25.168 "data_offset": 2048, 00:16:25.168 "data_size": 63488 00:16:25.168 }, 00:16:25.168 { 00:16:25.168 "name": "BaseBdev3", 00:16:25.168 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:25.168 "is_configured": true, 00:16:25.168 "data_offset": 2048, 00:16:25.168 
"data_size": 63488 00:16:25.168 }, 00:16:25.168 { 00:16:25.168 "name": "BaseBdev4", 00:16:25.168 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:25.168 "is_configured": true, 00:16:25.168 "data_offset": 2048, 00:16:25.168 "data_size": 63488 00:16:25.168 } 00:16:25.168 ] 00:16:25.168 }' 00:16:25.168 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.169 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.169 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.169 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.169 03:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.108 
03:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.108 "name": "raid_bdev1", 00:16:26.108 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:26.108 "strip_size_kb": 64, 00:16:26.108 "state": "online", 00:16:26.108 "raid_level": "raid5f", 00:16:26.108 "superblock": true, 00:16:26.108 "num_base_bdevs": 4, 00:16:26.108 "num_base_bdevs_discovered": 4, 00:16:26.108 "num_base_bdevs_operational": 4, 00:16:26.108 "process": { 00:16:26.108 "type": "rebuild", 00:16:26.108 "target": "spare", 00:16:26.108 "progress": { 00:16:26.108 "blocks": 130560, 00:16:26.108 "percent": 68 00:16:26.108 } 00:16:26.108 }, 00:16:26.108 "base_bdevs_list": [ 00:16:26.108 { 00:16:26.108 "name": "spare", 00:16:26.108 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:26.108 "is_configured": true, 00:16:26.108 "data_offset": 2048, 00:16:26.108 "data_size": 63488 00:16:26.108 }, 00:16:26.108 { 00:16:26.108 "name": "BaseBdev2", 00:16:26.108 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:26.108 "is_configured": true, 00:16:26.108 "data_offset": 2048, 00:16:26.108 "data_size": 63488 00:16:26.108 }, 00:16:26.108 { 00:16:26.108 "name": "BaseBdev3", 00:16:26.108 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:26.108 "is_configured": true, 00:16:26.108 "data_offset": 2048, 00:16:26.108 "data_size": 63488 00:16:26.108 }, 00:16:26.108 { 00:16:26.108 "name": "BaseBdev4", 00:16:26.108 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:26.108 "is_configured": true, 00:16:26.108 "data_offset": 2048, 00:16:26.108 "data_size": 63488 00:16:26.108 } 00:16:26.108 ] 00:16:26.108 }' 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.108 03:23:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.108 03:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.045 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.305 "name": "raid_bdev1", 00:16:27.305 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:27.305 "strip_size_kb": 64, 00:16:27.305 "state": "online", 00:16:27.305 "raid_level": "raid5f", 00:16:27.305 "superblock": true, 00:16:27.305 "num_base_bdevs": 4, 00:16:27.305 "num_base_bdevs_discovered": 4, 00:16:27.305 "num_base_bdevs_operational": 
4, 00:16:27.305 "process": { 00:16:27.305 "type": "rebuild", 00:16:27.305 "target": "spare", 00:16:27.305 "progress": { 00:16:27.305 "blocks": 151680, 00:16:27.305 "percent": 79 00:16:27.305 } 00:16:27.305 }, 00:16:27.305 "base_bdevs_list": [ 00:16:27.305 { 00:16:27.305 "name": "spare", 00:16:27.305 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:27.305 "is_configured": true, 00:16:27.305 "data_offset": 2048, 00:16:27.305 "data_size": 63488 00:16:27.305 }, 00:16:27.305 { 00:16:27.305 "name": "BaseBdev2", 00:16:27.305 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:27.305 "is_configured": true, 00:16:27.305 "data_offset": 2048, 00:16:27.305 "data_size": 63488 00:16:27.305 }, 00:16:27.305 { 00:16:27.305 "name": "BaseBdev3", 00:16:27.305 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:27.305 "is_configured": true, 00:16:27.305 "data_offset": 2048, 00:16:27.305 "data_size": 63488 00:16:27.305 }, 00:16:27.305 { 00:16:27.305 "name": "BaseBdev4", 00:16:27.305 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:27.305 "is_configured": true, 00:16:27.305 "data_offset": 2048, 00:16:27.305 "data_size": 63488 00:16:27.305 } 00:16:27.305 ] 00:16:27.305 }' 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.305 03:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.241 
03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.241 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.500 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.500 "name": "raid_bdev1", 00:16:28.500 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:28.500 "strip_size_kb": 64, 00:16:28.500 "state": "online", 00:16:28.500 "raid_level": "raid5f", 00:16:28.500 "superblock": true, 00:16:28.500 "num_base_bdevs": 4, 00:16:28.500 "num_base_bdevs_discovered": 4, 00:16:28.500 "num_base_bdevs_operational": 4, 00:16:28.500 "process": { 00:16:28.500 "type": "rebuild", 00:16:28.500 "target": "spare", 00:16:28.500 "progress": { 00:16:28.500 "blocks": 174720, 00:16:28.500 "percent": 91 00:16:28.500 } 00:16:28.500 }, 00:16:28.500 "base_bdevs_list": [ 00:16:28.500 { 00:16:28.500 "name": "spare", 00:16:28.500 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:28.500 "is_configured": true, 00:16:28.500 "data_offset": 2048, 00:16:28.500 "data_size": 63488 00:16:28.500 }, 00:16:28.500 { 00:16:28.500 "name": "BaseBdev2", 00:16:28.500 "uuid": 
"bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:28.500 "is_configured": true, 00:16:28.500 "data_offset": 2048, 00:16:28.500 "data_size": 63488 00:16:28.500 }, 00:16:28.500 { 00:16:28.500 "name": "BaseBdev3", 00:16:28.500 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:28.500 "is_configured": true, 00:16:28.500 "data_offset": 2048, 00:16:28.500 "data_size": 63488 00:16:28.500 }, 00:16:28.500 { 00:16:28.500 "name": "BaseBdev4", 00:16:28.500 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:28.500 "is_configured": true, 00:16:28.500 "data_offset": 2048, 00:16:28.500 "data_size": 63488 00:16:28.500 } 00:16:28.500 ] 00:16:28.500 }' 00:16:28.500 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.500 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.500 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.500 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.500 03:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.447 [2024-11-20 03:23:18.712077] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:29.447 [2024-11-20 03:23:18.712186] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:29.447 [2024-11-20 03:23:18.712787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.447 03:23:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.447 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.447 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.447 "name": "raid_bdev1", 00:16:29.447 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:29.447 "strip_size_kb": 64, 00:16:29.447 "state": "online", 00:16:29.447 "raid_level": "raid5f", 00:16:29.447 "superblock": true, 00:16:29.447 "num_base_bdevs": 4, 00:16:29.447 "num_base_bdevs_discovered": 4, 00:16:29.447 "num_base_bdevs_operational": 4, 00:16:29.447 "base_bdevs_list": [ 00:16:29.447 { 00:16:29.447 "name": "spare", 00:16:29.447 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:29.447 "is_configured": true, 00:16:29.447 "data_offset": 2048, 00:16:29.447 "data_size": 63488 00:16:29.447 }, 00:16:29.447 { 00:16:29.447 "name": "BaseBdev2", 00:16:29.447 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:29.447 "is_configured": true, 00:16:29.447 "data_offset": 2048, 00:16:29.447 "data_size": 63488 00:16:29.447 }, 00:16:29.447 { 00:16:29.447 "name": "BaseBdev3", 00:16:29.447 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:29.447 "is_configured": true, 00:16:29.447 "data_offset": 2048, 00:16:29.447 "data_size": 63488 00:16:29.447 }, 
00:16:29.447 { 00:16:29.447 "name": "BaseBdev4", 00:16:29.447 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:29.447 "is_configured": true, 00:16:29.447 "data_offset": 2048, 00:16:29.447 "data_size": 63488 00:16:29.447 } 00:16:29.447 ] 00:16:29.447 }' 00:16:29.447 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.723 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:29.723 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.724 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.724 "name": "raid_bdev1", 00:16:29.724 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:29.724 "strip_size_kb": 64, 00:16:29.724 "state": "online", 00:16:29.724 "raid_level": "raid5f", 00:16:29.724 "superblock": true, 00:16:29.724 "num_base_bdevs": 4, 00:16:29.724 "num_base_bdevs_discovered": 4, 00:16:29.724 "num_base_bdevs_operational": 4, 00:16:29.724 "base_bdevs_list": [ 00:16:29.724 { 00:16:29.724 "name": "spare", 00:16:29.724 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:29.724 "is_configured": true, 00:16:29.724 "data_offset": 2048, 00:16:29.724 "data_size": 63488 00:16:29.724 }, 00:16:29.724 { 00:16:29.724 "name": "BaseBdev2", 00:16:29.724 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:29.724 "is_configured": true, 00:16:29.724 "data_offset": 2048, 00:16:29.724 "data_size": 63488 00:16:29.724 }, 00:16:29.724 { 00:16:29.724 "name": "BaseBdev3", 00:16:29.724 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:29.724 "is_configured": true, 00:16:29.724 "data_offset": 2048, 00:16:29.724 "data_size": 63488 00:16:29.724 }, 00:16:29.724 { 00:16:29.724 "name": "BaseBdev4", 00:16:29.724 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:29.724 "is_configured": true, 00:16:29.724 "data_offset": 2048, 00:16:29.724 "data_size": 63488 00:16:29.724 } 00:16:29.724 ] 00:16:29.724 }' 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:29.725 03:23:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.725 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.725 "name": "raid_bdev1", 00:16:29.725 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:29.725 "strip_size_kb": 64, 00:16:29.725 "state": "online", 00:16:29.725 "raid_level": "raid5f", 00:16:29.725 "superblock": true, 00:16:29.725 "num_base_bdevs": 4, 00:16:29.725 "num_base_bdevs_discovered": 4, 00:16:29.725 "num_base_bdevs_operational": 4, 00:16:29.725 
"base_bdevs_list": [ 00:16:29.725 { 00:16:29.725 "name": "spare", 00:16:29.725 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:29.725 "is_configured": true, 00:16:29.725 "data_offset": 2048, 00:16:29.725 "data_size": 63488 00:16:29.725 }, 00:16:29.725 { 00:16:29.725 "name": "BaseBdev2", 00:16:29.725 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:29.725 "is_configured": true, 00:16:29.725 "data_offset": 2048, 00:16:29.725 "data_size": 63488 00:16:29.725 }, 00:16:29.725 { 00:16:29.725 "name": "BaseBdev3", 00:16:29.725 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:29.725 "is_configured": true, 00:16:29.725 "data_offset": 2048, 00:16:29.725 "data_size": 63488 00:16:29.725 }, 00:16:29.725 { 00:16:29.726 "name": "BaseBdev4", 00:16:29.726 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:29.726 "is_configured": true, 00:16:29.726 "data_offset": 2048, 00:16:29.726 "data_size": 63488 00:16:29.726 } 00:16:29.726 ] 00:16:29.726 }' 00:16:29.726 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.726 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.299 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:30.299 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.299 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.299 [2024-11-20 03:23:19.708060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.299 [2024-11-20 03:23:19.708090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.299 [2024-11-20 03:23:19.708169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.299 [2024-11-20 03:23:19.708262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:30.300 [2024-11-20 03:23:19.708283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.300 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:30.559 /dev/nbd0 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:30.559 03:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.559 1+0 records in 00:16:30.559 1+0 records out 00:16:30.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00155555 s, 2.6 MB/s 00:16:30.559 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.559 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:30.559 03:23:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.559 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:30.559 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:30.559 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.559 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.559 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:30.819 /dev/nbd1 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:16:30.819 1+0 records in 00:16:30.819 1+0 records out 00:16:30.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336832 s, 12.2 MB/s 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.819 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.079 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:31.339 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.340 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.340 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.340 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:31.340 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.340 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.340 [2024-11-20 03:23:20.855705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:31.340 [2024-11-20 03:23:20.855763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.340 [2024-11-20 03:23:20.855789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:31.340 [2024-11-20 03:23:20.855798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.340 [2024-11-20 03:23:20.857957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.340 [2024-11-20 03:23:20.857994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:31.340 [2024-11-20 03:23:20.858084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:31.340 [2024-11-20 03:23:20.858139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:31.340 [2024-11-20 03:23:20.858266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.340 [2024-11-20 03:23:20.858349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.340 [2024-11-20 03:23:20.858443] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:31.340 spare 00:16:31.340 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.340 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:31.340 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.340 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.340 [2024-11-20 03:23:20.958338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:31.340 [2024-11-20 03:23:20.958411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:31.340 [2024-11-20 03:23:20.958717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:31.340 [2024-11-20 03:23:20.965475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:31.340 [2024-11-20 03:23:20.965527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:31.340 [2024-11-20 03:23:20.965754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.600 03:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.600 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.600 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.600 "name": "raid_bdev1", 00:16:31.600 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:31.600 "strip_size_kb": 64, 00:16:31.600 "state": "online", 00:16:31.600 "raid_level": "raid5f", 00:16:31.600 "superblock": true, 00:16:31.600 "num_base_bdevs": 4, 00:16:31.600 "num_base_bdevs_discovered": 4, 00:16:31.600 "num_base_bdevs_operational": 4, 00:16:31.600 "base_bdevs_list": [ 00:16:31.600 { 00:16:31.600 "name": "spare", 00:16:31.600 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:31.600 "is_configured": true, 00:16:31.600 "data_offset": 2048, 00:16:31.600 "data_size": 63488 00:16:31.600 }, 00:16:31.600 { 00:16:31.600 "name": "BaseBdev2", 00:16:31.600 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:31.600 "is_configured": true, 00:16:31.600 "data_offset": 
2048, 00:16:31.600 "data_size": 63488 00:16:31.600 }, 00:16:31.600 { 00:16:31.600 "name": "BaseBdev3", 00:16:31.600 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:31.600 "is_configured": true, 00:16:31.600 "data_offset": 2048, 00:16:31.600 "data_size": 63488 00:16:31.600 }, 00:16:31.600 { 00:16:31.600 "name": "BaseBdev4", 00:16:31.600 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:31.600 "is_configured": true, 00:16:31.600 "data_offset": 2048, 00:16:31.600 "data_size": 63488 00:16:31.600 } 00:16:31.600 ] 00:16:31.600 }' 00:16:31.600 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.600 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.858 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.859 "name": 
"raid_bdev1", 00:16:31.859 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:31.859 "strip_size_kb": 64, 00:16:31.859 "state": "online", 00:16:31.859 "raid_level": "raid5f", 00:16:31.859 "superblock": true, 00:16:31.859 "num_base_bdevs": 4, 00:16:31.859 "num_base_bdevs_discovered": 4, 00:16:31.859 "num_base_bdevs_operational": 4, 00:16:31.859 "base_bdevs_list": [ 00:16:31.859 { 00:16:31.859 "name": "spare", 00:16:31.859 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:31.859 "is_configured": true, 00:16:31.859 "data_offset": 2048, 00:16:31.859 "data_size": 63488 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "name": "BaseBdev2", 00:16:31.859 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:31.859 "is_configured": true, 00:16:31.859 "data_offset": 2048, 00:16:31.859 "data_size": 63488 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "name": "BaseBdev3", 00:16:31.859 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:31.859 "is_configured": true, 00:16:31.859 "data_offset": 2048, 00:16:31.859 "data_size": 63488 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "name": "BaseBdev4", 00:16:31.859 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:31.859 "is_configured": true, 00:16:31.859 "data_offset": 2048, 00:16:31.859 "data_size": 63488 00:16:31.859 } 00:16:31.859 ] 00:16:31.859 }' 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.859 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.117 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.117 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.117 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:16:32.117 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.117 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.117 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.118 [2024-11-20 03:23:21.589109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.118 "name": "raid_bdev1", 00:16:32.118 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:32.118 "strip_size_kb": 64, 00:16:32.118 "state": "online", 00:16:32.118 "raid_level": "raid5f", 00:16:32.118 "superblock": true, 00:16:32.118 "num_base_bdevs": 4, 00:16:32.118 "num_base_bdevs_discovered": 3, 00:16:32.118 "num_base_bdevs_operational": 3, 00:16:32.118 "base_bdevs_list": [ 00:16:32.118 { 00:16:32.118 "name": null, 00:16:32.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.118 "is_configured": false, 00:16:32.118 "data_offset": 0, 00:16:32.118 "data_size": 63488 00:16:32.118 }, 00:16:32.118 { 00:16:32.118 "name": "BaseBdev2", 00:16:32.118 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:32.118 "is_configured": true, 00:16:32.118 "data_offset": 2048, 00:16:32.118 "data_size": 63488 00:16:32.118 }, 00:16:32.118 { 00:16:32.118 "name": "BaseBdev3", 00:16:32.118 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:32.118 "is_configured": true, 00:16:32.118 "data_offset": 2048, 00:16:32.118 "data_size": 63488 00:16:32.118 }, 00:16:32.118 { 00:16:32.118 "name": "BaseBdev4", 00:16:32.118 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:32.118 "is_configured": true, 00:16:32.118 "data_offset": 
2048, 00:16:32.118 "data_size": 63488 00:16:32.118 } 00:16:32.118 ] 00:16:32.118 }' 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.118 03:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.687 03:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.687 03:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.687 03:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.687 [2024-11-20 03:23:22.044353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.687 [2024-11-20 03:23:22.044578] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:32.687 [2024-11-20 03:23:22.044689] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:32.687 [2024-11-20 03:23:22.044764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.687 [2024-11-20 03:23:22.059145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:32.687 03:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.687 03:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:32.687 [2024-11-20 03:23:22.067423] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.626 "name": "raid_bdev1", 00:16:33.626 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:33.626 "strip_size_kb": 64, 00:16:33.626 "state": "online", 00:16:33.626 
"raid_level": "raid5f", 00:16:33.626 "superblock": true, 00:16:33.626 "num_base_bdevs": 4, 00:16:33.626 "num_base_bdevs_discovered": 4, 00:16:33.626 "num_base_bdevs_operational": 4, 00:16:33.626 "process": { 00:16:33.626 "type": "rebuild", 00:16:33.626 "target": "spare", 00:16:33.626 "progress": { 00:16:33.626 "blocks": 19200, 00:16:33.626 "percent": 10 00:16:33.626 } 00:16:33.626 }, 00:16:33.626 "base_bdevs_list": [ 00:16:33.626 { 00:16:33.626 "name": "spare", 00:16:33.626 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:33.626 "is_configured": true, 00:16:33.626 "data_offset": 2048, 00:16:33.626 "data_size": 63488 00:16:33.626 }, 00:16:33.626 { 00:16:33.626 "name": "BaseBdev2", 00:16:33.626 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:33.626 "is_configured": true, 00:16:33.626 "data_offset": 2048, 00:16:33.626 "data_size": 63488 00:16:33.626 }, 00:16:33.626 { 00:16:33.626 "name": "BaseBdev3", 00:16:33.626 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:33.626 "is_configured": true, 00:16:33.626 "data_offset": 2048, 00:16:33.626 "data_size": 63488 00:16:33.626 }, 00:16:33.626 { 00:16:33.626 "name": "BaseBdev4", 00:16:33.626 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:33.626 "is_configured": true, 00:16:33.626 "data_offset": 2048, 00:16:33.626 "data_size": 63488 00:16:33.626 } 00:16:33.626 ] 00:16:33.626 }' 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.626 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.626 [2024-11-20 03:23:23.218537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.886 [2024-11-20 03:23:23.273145] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.886 [2024-11-20 03:23:23.273223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.886 [2024-11-20 03:23:23.273239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.886 [2024-11-20 03:23:23.273248] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.886 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.886 "name": "raid_bdev1", 00:16:33.886 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:33.886 "strip_size_kb": 64, 00:16:33.886 "state": "online", 00:16:33.886 "raid_level": "raid5f", 00:16:33.886 "superblock": true, 00:16:33.886 "num_base_bdevs": 4, 00:16:33.886 "num_base_bdevs_discovered": 3, 00:16:33.886 "num_base_bdevs_operational": 3, 00:16:33.886 "base_bdevs_list": [ 00:16:33.886 { 00:16:33.886 "name": null, 00:16:33.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.886 "is_configured": false, 00:16:33.886 "data_offset": 0, 00:16:33.886 "data_size": 63488 00:16:33.886 }, 00:16:33.886 { 00:16:33.886 "name": "BaseBdev2", 00:16:33.886 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:33.886 "is_configured": true, 00:16:33.886 "data_offset": 2048, 00:16:33.886 "data_size": 63488 00:16:33.886 }, 00:16:33.886 { 00:16:33.886 "name": "BaseBdev3", 00:16:33.886 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:33.886 "is_configured": true, 00:16:33.887 "data_offset": 2048, 00:16:33.887 "data_size": 63488 00:16:33.887 }, 00:16:33.887 { 00:16:33.887 "name": "BaseBdev4", 00:16:33.887 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:33.887 "is_configured": true, 00:16:33.887 "data_offset": 2048, 00:16:33.887 "data_size": 63488 00:16:33.887 } 00:16:33.887 ] 00:16:33.887 
}' 00:16:33.887 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.887 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.146 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.146 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.146 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.146 [2024-11-20 03:23:23.682580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.146 [2024-11-20 03:23:23.682718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.146 [2024-11-20 03:23:23.682771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:34.146 [2024-11-20 03:23:23.682819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.146 [2024-11-20 03:23:23.683399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.146 [2024-11-20 03:23:23.683478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.146 [2024-11-20 03:23:23.683662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.146 [2024-11-20 03:23:23.683714] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:34.146 [2024-11-20 03:23:23.683776] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:34.146 [2024-11-20 03:23:23.683843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.147 [2024-11-20 03:23:23.698896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:34.147 spare 00:16:34.147 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.147 03:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:34.147 [2024-11-20 03:23:23.707942] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.086 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.086 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.086 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.086 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.086 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.086 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.086 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.086 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.086 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.347 "name": "raid_bdev1", 00:16:35.347 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:35.347 "strip_size_kb": 64, 00:16:35.347 "state": 
"online", 00:16:35.347 "raid_level": "raid5f", 00:16:35.347 "superblock": true, 00:16:35.347 "num_base_bdevs": 4, 00:16:35.347 "num_base_bdevs_discovered": 4, 00:16:35.347 "num_base_bdevs_operational": 4, 00:16:35.347 "process": { 00:16:35.347 "type": "rebuild", 00:16:35.347 "target": "spare", 00:16:35.347 "progress": { 00:16:35.347 "blocks": 19200, 00:16:35.347 "percent": 10 00:16:35.347 } 00:16:35.347 }, 00:16:35.347 "base_bdevs_list": [ 00:16:35.347 { 00:16:35.347 "name": "spare", 00:16:35.347 "uuid": "35a5afbb-4b69-548b-830e-4f186227d1f7", 00:16:35.347 "is_configured": true, 00:16:35.347 "data_offset": 2048, 00:16:35.347 "data_size": 63488 00:16:35.347 }, 00:16:35.347 { 00:16:35.347 "name": "BaseBdev2", 00:16:35.347 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:35.347 "is_configured": true, 00:16:35.347 "data_offset": 2048, 00:16:35.347 "data_size": 63488 00:16:35.347 }, 00:16:35.347 { 00:16:35.347 "name": "BaseBdev3", 00:16:35.347 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:35.347 "is_configured": true, 00:16:35.347 "data_offset": 2048, 00:16:35.347 "data_size": 63488 00:16:35.347 }, 00:16:35.347 { 00:16:35.347 "name": "BaseBdev4", 00:16:35.347 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:35.347 "is_configured": true, 00:16:35.347 "data_offset": 2048, 00:16:35.347 "data_size": 63488 00:16:35.347 } 00:16:35.347 ] 00:16:35.347 }' 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:35.347 03:23:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.347 [2024-11-20 03:23:24.858987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.347 [2024-11-20 03:23:24.914096] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:35.347 [2024-11-20 03:23:24.914188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.347 [2024-11-20 03:23:24.914225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.347 [2024-11-20 03:23:24.914232] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.347 03:23:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.347 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.607 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.607 "name": "raid_bdev1", 00:16:35.607 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:35.607 "strip_size_kb": 64, 00:16:35.607 "state": "online", 00:16:35.607 "raid_level": "raid5f", 00:16:35.607 "superblock": true, 00:16:35.607 "num_base_bdevs": 4, 00:16:35.607 "num_base_bdevs_discovered": 3, 00:16:35.607 "num_base_bdevs_operational": 3, 00:16:35.607 "base_bdevs_list": [ 00:16:35.607 { 00:16:35.607 "name": null, 00:16:35.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.607 "is_configured": false, 00:16:35.607 "data_offset": 0, 00:16:35.607 "data_size": 63488 00:16:35.607 }, 00:16:35.607 { 00:16:35.607 "name": "BaseBdev2", 00:16:35.607 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:35.607 "is_configured": true, 00:16:35.607 "data_offset": 2048, 00:16:35.607 "data_size": 63488 00:16:35.607 }, 00:16:35.607 { 00:16:35.607 "name": "BaseBdev3", 00:16:35.607 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:35.607 "is_configured": true, 00:16:35.607 "data_offset": 2048, 00:16:35.607 "data_size": 63488 00:16:35.607 }, 00:16:35.607 { 00:16:35.607 "name": "BaseBdev4", 00:16:35.607 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:35.607 "is_configured": true, 00:16:35.607 "data_offset": 2048, 00:16:35.607 
"data_size": 63488 00:16:35.607 } 00:16:35.607 ] 00:16:35.607 }' 00:16:35.607 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.607 03:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.867 "name": "raid_bdev1", 00:16:35.867 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:35.867 "strip_size_kb": 64, 00:16:35.867 "state": "online", 00:16:35.867 "raid_level": "raid5f", 00:16:35.867 "superblock": true, 00:16:35.867 "num_base_bdevs": 4, 00:16:35.867 "num_base_bdevs_discovered": 3, 00:16:35.867 "num_base_bdevs_operational": 3, 00:16:35.867 "base_bdevs_list": [ 00:16:35.867 { 00:16:35.867 "name": null, 00:16:35.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.867 
"is_configured": false, 00:16:35.867 "data_offset": 0, 00:16:35.867 "data_size": 63488 00:16:35.867 }, 00:16:35.867 { 00:16:35.867 "name": "BaseBdev2", 00:16:35.867 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:35.867 "is_configured": true, 00:16:35.867 "data_offset": 2048, 00:16:35.867 "data_size": 63488 00:16:35.867 }, 00:16:35.867 { 00:16:35.867 "name": "BaseBdev3", 00:16:35.867 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:35.867 "is_configured": true, 00:16:35.867 "data_offset": 2048, 00:16:35.867 "data_size": 63488 00:16:35.867 }, 00:16:35.867 { 00:16:35.867 "name": "BaseBdev4", 00:16:35.867 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:35.867 "is_configured": true, 00:16:35.867 "data_offset": 2048, 00:16:35.867 "data_size": 63488 00:16:35.867 } 00:16:35.867 ] 00:16:35.867 }' 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.867 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.127 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.127 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:36.127 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.127 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.127 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.127 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:36.127 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.127 03:23:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.127 [2024-11-20 03:23:25.546030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:36.127 [2024-11-20 03:23:25.546080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.127 [2024-11-20 03:23:25.546099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:36.127 [2024-11-20 03:23:25.546108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.127 [2024-11-20 03:23:25.546570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.127 [2024-11-20 03:23:25.546589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:36.127 [2024-11-20 03:23:25.546677] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:36.127 [2024-11-20 03:23:25.546692] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:36.127 [2024-11-20 03:23:25.546706] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:36.127 [2024-11-20 03:23:25.546717] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:36.127 BaseBdev1 00:16:36.127 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.127 03:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.067 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.068 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.068 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.068 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.068 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.068 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.068 "name": "raid_bdev1", 00:16:37.068 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:37.068 "strip_size_kb": 64, 00:16:37.068 "state": "online", 00:16:37.068 "raid_level": "raid5f", 00:16:37.068 "superblock": true, 00:16:37.068 "num_base_bdevs": 4, 00:16:37.068 "num_base_bdevs_discovered": 3, 00:16:37.068 "num_base_bdevs_operational": 3, 00:16:37.068 "base_bdevs_list": [ 00:16:37.068 { 00:16:37.068 "name": null, 00:16:37.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.068 "is_configured": false, 00:16:37.068 
"data_offset": 0, 00:16:37.068 "data_size": 63488 00:16:37.068 }, 00:16:37.068 { 00:16:37.068 "name": "BaseBdev2", 00:16:37.068 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:37.068 "is_configured": true, 00:16:37.068 "data_offset": 2048, 00:16:37.068 "data_size": 63488 00:16:37.068 }, 00:16:37.068 { 00:16:37.068 "name": "BaseBdev3", 00:16:37.068 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:37.068 "is_configured": true, 00:16:37.068 "data_offset": 2048, 00:16:37.068 "data_size": 63488 00:16:37.068 }, 00:16:37.068 { 00:16:37.068 "name": "BaseBdev4", 00:16:37.068 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:37.068 "is_configured": true, 00:16:37.068 "data_offset": 2048, 00:16:37.068 "data_size": 63488 00:16:37.068 } 00:16:37.068 ] 00:16:37.068 }' 00:16:37.068 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.068 03:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.639 "name": "raid_bdev1", 00:16:37.639 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:37.639 "strip_size_kb": 64, 00:16:37.639 "state": "online", 00:16:37.639 "raid_level": "raid5f", 00:16:37.639 "superblock": true, 00:16:37.639 "num_base_bdevs": 4, 00:16:37.639 "num_base_bdevs_discovered": 3, 00:16:37.639 "num_base_bdevs_operational": 3, 00:16:37.639 "base_bdevs_list": [ 00:16:37.639 { 00:16:37.639 "name": null, 00:16:37.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.639 "is_configured": false, 00:16:37.639 "data_offset": 0, 00:16:37.639 "data_size": 63488 00:16:37.639 }, 00:16:37.639 { 00:16:37.639 "name": "BaseBdev2", 00:16:37.639 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:37.639 "is_configured": true, 00:16:37.639 "data_offset": 2048, 00:16:37.639 "data_size": 63488 00:16:37.639 }, 00:16:37.639 { 00:16:37.639 "name": "BaseBdev3", 00:16:37.639 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:37.639 "is_configured": true, 00:16:37.639 "data_offset": 2048, 00:16:37.639 "data_size": 63488 00:16:37.639 }, 00:16:37.639 { 00:16:37.639 "name": "BaseBdev4", 00:16:37.639 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:37.639 "is_configured": true, 00:16:37.639 "data_offset": 2048, 00:16:37.639 "data_size": 63488 00:16:37.639 } 00:16:37.639 ] 00:16:37.639 }' 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.639 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.639 
03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.640 [2024-11-20 03:23:27.171313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.640 [2024-11-20 03:23:27.171494] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.640 [2024-11-20 03:23:27.171513] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:37.640 request: 00:16:37.640 { 00:16:37.640 "base_bdev": "BaseBdev1", 00:16:37.640 "raid_bdev": "raid_bdev1", 00:16:37.640 "method": "bdev_raid_add_base_bdev", 00:16:37.640 "req_id": 1 00:16:37.640 } 00:16:37.640 Got JSON-RPC error response 00:16:37.640 response: 00:16:37.640 { 00:16:37.640 "code": -22, 00:16:37.640 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:37.640 } 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:37.640 03:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:38.581 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.581 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.582 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.841 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.841 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.841 "name": "raid_bdev1", 00:16:38.841 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:38.842 "strip_size_kb": 64, 00:16:38.842 "state": "online", 00:16:38.842 "raid_level": "raid5f", 00:16:38.842 "superblock": true, 00:16:38.842 "num_base_bdevs": 4, 00:16:38.842 "num_base_bdevs_discovered": 3, 00:16:38.842 "num_base_bdevs_operational": 3, 00:16:38.842 "base_bdevs_list": [ 00:16:38.842 { 00:16:38.842 "name": null, 00:16:38.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.842 "is_configured": false, 00:16:38.842 "data_offset": 0, 00:16:38.842 "data_size": 63488 00:16:38.842 }, 00:16:38.842 { 00:16:38.842 "name": "BaseBdev2", 00:16:38.842 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:38.842 "is_configured": true, 00:16:38.842 "data_offset": 2048, 00:16:38.842 "data_size": 63488 00:16:38.842 }, 00:16:38.842 { 00:16:38.842 "name": "BaseBdev3", 00:16:38.842 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:38.842 "is_configured": true, 00:16:38.842 "data_offset": 2048, 00:16:38.842 "data_size": 63488 00:16:38.842 }, 00:16:38.842 { 00:16:38.842 "name": "BaseBdev4", 00:16:38.842 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:38.842 "is_configured": true, 00:16:38.842 "data_offset": 2048, 00:16:38.842 "data_size": 63488 00:16:38.842 } 00:16:38.842 ] 00:16:38.842 }' 00:16:38.842 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.842 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.102 "name": "raid_bdev1", 00:16:39.102 "uuid": "a48f5ecd-bffb-422b-ada9-784db5c23d82", 00:16:39.102 "strip_size_kb": 64, 00:16:39.102 "state": "online", 00:16:39.102 "raid_level": "raid5f", 00:16:39.102 "superblock": true, 00:16:39.102 "num_base_bdevs": 4, 00:16:39.102 "num_base_bdevs_discovered": 3, 00:16:39.102 "num_base_bdevs_operational": 3, 00:16:39.102 "base_bdevs_list": [ 00:16:39.102 { 00:16:39.102 "name": null, 00:16:39.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.102 "is_configured": false, 00:16:39.102 "data_offset": 0, 00:16:39.102 "data_size": 63488 00:16:39.102 }, 00:16:39.102 { 00:16:39.102 "name": "BaseBdev2", 00:16:39.102 "uuid": "bd07fe84-1aa8-58f2-91f2-94301e5bed10", 00:16:39.102 "is_configured": true, 
00:16:39.102 "data_offset": 2048, 00:16:39.102 "data_size": 63488 00:16:39.102 }, 00:16:39.102 { 00:16:39.102 "name": "BaseBdev3", 00:16:39.102 "uuid": "394277d2-b51f-5718-9695-32b7b8e7eef0", 00:16:39.102 "is_configured": true, 00:16:39.102 "data_offset": 2048, 00:16:39.102 "data_size": 63488 00:16:39.102 }, 00:16:39.102 { 00:16:39.102 "name": "BaseBdev4", 00:16:39.102 "uuid": "a4a6bcd5-77f5-5dea-843a-ba35e950780f", 00:16:39.102 "is_configured": true, 00:16:39.102 "data_offset": 2048, 00:16:39.102 "data_size": 63488 00:16:39.102 } 00:16:39.102 ] 00:16:39.102 }' 00:16:39.102 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84921 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84921 ']' 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84921 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84921 00:16:39.363 killing process with pid 84921 00:16:39.363 Received shutdown signal, test time was about 60.000000 seconds 00:16:39.363 00:16:39.363 Latency(us) 00:16:39.363 [2024-11-20T03:23:28.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.363 [2024-11-20T03:23:28.998Z] 
=================================================================================================================== 00:16:39.363 [2024-11-20T03:23:28.998Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84921' 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84921 00:16:39.363 [2024-11-20 03:23:28.814755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.363 [2024-11-20 03:23:28.814872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.363 03:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84921 00:16:39.363 [2024-11-20 03:23:28.814943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.363 [2024-11-20 03:23:28.814955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:39.933 [2024-11-20 03:23:29.276436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.875 03:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:40.875 00:16:40.875 real 0m26.657s 00:16:40.875 user 0m33.524s 00:16:40.875 sys 0m2.886s 00:16:40.875 ************************************ 00:16:40.875 END TEST raid5f_rebuild_test_sb 00:16:40.875 ************************************ 00:16:40.875 03:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.875 03:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.875 03:23:30 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:40.875 03:23:30 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:40.875 03:23:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:40.875 03:23:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.875 03:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.875 ************************************ 00:16:40.875 START TEST raid_state_function_test_sb_4k 00:16:40.875 ************************************ 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.875 03:23:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:40.875 Process raid pid: 85738 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85738 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85738' 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85738 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85738 ']' 00:16:40.875 03:23:30 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.875 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.876 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.876 03:23:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.876 [2024-11-20 03:23:30.473236] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:40.876 [2024-11-20 03:23:30.473441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.136 [2024-11-20 03:23:30.645836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.136 [2024-11-20 03:23:30.748355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.395 [2024-11-20 03:23:30.940693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.395 [2024-11-20 03:23:30.940801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.963 [2024-11-20 03:23:31.297180] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.963 [2024-11-20 03:23:31.297233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.963 [2024-11-20 03:23:31.297244] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.963 [2024-11-20 03:23:31.297253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.963 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.964 
03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.964 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.964 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.964 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.964 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.964 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.964 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.964 "name": "Existed_Raid", 00:16:41.964 "uuid": "0eb7b625-bef9-4187-9f0b-104cb201509a", 00:16:41.964 "strip_size_kb": 0, 00:16:41.964 "state": "configuring", 00:16:41.964 "raid_level": "raid1", 00:16:41.964 "superblock": true, 00:16:41.964 "num_base_bdevs": 2, 00:16:41.964 "num_base_bdevs_discovered": 0, 00:16:41.964 "num_base_bdevs_operational": 2, 00:16:41.964 "base_bdevs_list": [ 00:16:41.964 { 00:16:41.964 "name": "BaseBdev1", 00:16:41.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.964 "is_configured": false, 00:16:41.964 "data_offset": 0, 00:16:41.964 "data_size": 0 00:16:41.964 }, 00:16:41.964 { 00:16:41.964 "name": "BaseBdev2", 00:16:41.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.964 "is_configured": false, 00:16:41.964 "data_offset": 0, 00:16:41.964 "data_size": 0 00:16:41.964 } 00:16:41.964 ] 00:16:41.964 }' 00:16:41.964 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.964 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.222 [2024-11-20 03:23:31.768372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.222 [2024-11-20 03:23:31.768501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.222 [2024-11-20 03:23:31.776368] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.222 [2024-11-20 03:23:31.776476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.222 [2024-11-20 03:23:31.776521] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.222 [2024-11-20 03:23:31.776561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.222 03:23:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.222 [2024-11-20 03:23:31.827065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.222 BaseBdev1 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.222 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.222 [ 00:16:42.222 { 00:16:42.222 "name": "BaseBdev1", 00:16:42.222 "aliases": [ 00:16:42.222 
"f684e121-3d9b-453b-8cce-1b6fbd2c6f7b" 00:16:42.222 ], 00:16:42.222 "product_name": "Malloc disk", 00:16:42.222 "block_size": 4096, 00:16:42.222 "num_blocks": 8192, 00:16:42.222 "uuid": "f684e121-3d9b-453b-8cce-1b6fbd2c6f7b", 00:16:42.222 "assigned_rate_limits": { 00:16:42.222 "rw_ios_per_sec": 0, 00:16:42.222 "rw_mbytes_per_sec": 0, 00:16:42.222 "r_mbytes_per_sec": 0, 00:16:42.222 "w_mbytes_per_sec": 0 00:16:42.222 }, 00:16:42.222 "claimed": true, 00:16:42.222 "claim_type": "exclusive_write", 00:16:42.223 "zoned": false, 00:16:42.223 "supported_io_types": { 00:16:42.223 "read": true, 00:16:42.223 "write": true, 00:16:42.223 "unmap": true, 00:16:42.223 "flush": true, 00:16:42.223 "reset": true, 00:16:42.223 "nvme_admin": false, 00:16:42.223 "nvme_io": false, 00:16:42.223 "nvme_io_md": false, 00:16:42.223 "write_zeroes": true, 00:16:42.223 "zcopy": true, 00:16:42.223 "get_zone_info": false, 00:16:42.223 "zone_management": false, 00:16:42.223 "zone_append": false, 00:16:42.223 "compare": false, 00:16:42.223 "compare_and_write": false, 00:16:42.223 "abort": true, 00:16:42.223 "seek_hole": false, 00:16:42.223 "seek_data": false, 00:16:42.223 "copy": true, 00:16:42.223 "nvme_iov_md": false 00:16:42.223 }, 00:16:42.223 "memory_domains": [ 00:16:42.223 { 00:16:42.223 "dma_device_id": "system", 00:16:42.223 "dma_device_type": 1 00:16:42.223 }, 00:16:42.223 { 00:16:42.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.223 "dma_device_type": 2 00:16:42.223 } 00:16:42.223 ], 00:16:42.223 "driver_specific": {} 00:16:42.223 } 00:16:42.223 ] 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.223 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.481 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.481 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.481 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.481 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.481 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.481 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.481 "name": "Existed_Raid", 00:16:42.481 "uuid": "6db0bf39-3ddc-40f8-8243-13972e6c9297", 00:16:42.481 "strip_size_kb": 0, 00:16:42.481 "state": "configuring", 00:16:42.481 "raid_level": "raid1", 00:16:42.481 "superblock": true, 00:16:42.481 "num_base_bdevs": 2, 00:16:42.481 
"num_base_bdevs_discovered": 1, 00:16:42.481 "num_base_bdevs_operational": 2, 00:16:42.481 "base_bdevs_list": [ 00:16:42.481 { 00:16:42.481 "name": "BaseBdev1", 00:16:42.481 "uuid": "f684e121-3d9b-453b-8cce-1b6fbd2c6f7b", 00:16:42.481 "is_configured": true, 00:16:42.481 "data_offset": 256, 00:16:42.481 "data_size": 7936 00:16:42.481 }, 00:16:42.481 { 00:16:42.481 "name": "BaseBdev2", 00:16:42.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.481 "is_configured": false, 00:16:42.481 "data_offset": 0, 00:16:42.481 "data_size": 0 00:16:42.481 } 00:16:42.481 ] 00:16:42.481 }' 00:16:42.481 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.481 03:23:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.739 [2024-11-20 03:23:32.330574] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.739 [2024-11-20 03:23:32.330756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.739 [2024-11-20 03:23:32.338666] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.739 [2024-11-20 03:23:32.340926] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.739 [2024-11-20 03:23:32.341031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.739 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.998 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.998 "name": "Existed_Raid", 00:16:42.998 "uuid": "35999195-2028-4867-99c1-40485408d24d", 00:16:42.998 "strip_size_kb": 0, 00:16:42.998 "state": "configuring", 00:16:42.998 "raid_level": "raid1", 00:16:42.998 "superblock": true, 00:16:42.998 "num_base_bdevs": 2, 00:16:42.998 "num_base_bdevs_discovered": 1, 00:16:42.998 "num_base_bdevs_operational": 2, 00:16:42.998 "base_bdevs_list": [ 00:16:42.998 { 00:16:42.998 "name": "BaseBdev1", 00:16:42.998 "uuid": "f684e121-3d9b-453b-8cce-1b6fbd2c6f7b", 00:16:42.998 "is_configured": true, 00:16:42.998 "data_offset": 256, 00:16:42.998 "data_size": 7936 00:16:42.998 }, 00:16:42.998 { 00:16:42.998 "name": "BaseBdev2", 00:16:42.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.998 "is_configured": false, 00:16:42.998 "data_offset": 0, 00:16:42.998 "data_size": 0 00:16:42.998 } 00:16:42.998 ] 00:16:42.998 }' 00:16:42.998 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.998 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.257 03:23:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.257 [2024-11-20 03:23:32.857880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.257 [2024-11-20 03:23:32.858288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:43.257 [2024-11-20 03:23:32.858315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:43.257 BaseBdev2 00:16:43.257 [2024-11-20 03:23:32.858753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:43.257 [2024-11-20 03:23:32.858996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:43.257 [2024-11-20 03:23:32.859021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:43.257 [2024-11-20 03:23:32.859247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.257 03:23:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.257 [ 00:16:43.257 { 00:16:43.257 "name": "BaseBdev2", 00:16:43.257 "aliases": [ 00:16:43.257 "9df88464-4cdc-47a6-9cac-431981e383c4" 00:16:43.257 ], 00:16:43.257 "product_name": "Malloc disk", 00:16:43.257 "block_size": 4096, 00:16:43.257 "num_blocks": 8192, 00:16:43.257 "uuid": "9df88464-4cdc-47a6-9cac-431981e383c4", 00:16:43.257 "assigned_rate_limits": { 00:16:43.257 "rw_ios_per_sec": 0, 00:16:43.257 "rw_mbytes_per_sec": 0, 00:16:43.257 "r_mbytes_per_sec": 0, 00:16:43.257 "w_mbytes_per_sec": 0 00:16:43.257 }, 00:16:43.257 "claimed": true, 00:16:43.257 "claim_type": "exclusive_write", 00:16:43.257 "zoned": false, 00:16:43.257 "supported_io_types": { 00:16:43.257 "read": true, 00:16:43.257 "write": true, 00:16:43.257 "unmap": true, 00:16:43.257 "flush": true, 00:16:43.257 "reset": true, 00:16:43.257 "nvme_admin": false, 00:16:43.257 "nvme_io": false, 00:16:43.257 "nvme_io_md": false, 00:16:43.257 "write_zeroes": true, 00:16:43.257 "zcopy": true, 00:16:43.257 "get_zone_info": false, 00:16:43.257 "zone_management": false, 00:16:43.257 "zone_append": false, 00:16:43.257 "compare": false, 00:16:43.257 "compare_and_write": false, 00:16:43.257 "abort": true, 00:16:43.257 "seek_hole": false, 00:16:43.257 "seek_data": false, 00:16:43.257 "copy": true, 00:16:43.257 "nvme_iov_md": false 
00:16:43.257 }, 00:16:43.257 "memory_domains": [ 00:16:43.257 { 00:16:43.257 "dma_device_id": "system", 00:16:43.257 "dma_device_type": 1 00:16:43.257 }, 00:16:43.257 { 00:16:43.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.257 "dma_device_type": 2 00:16:43.257 } 00:16:43.257 ], 00:16:43.257 "driver_specific": {} 00:16:43.257 } 00:16:43.257 ] 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.257 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.516 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.516 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.516 "name": "Existed_Raid", 00:16:43.516 "uuid": "35999195-2028-4867-99c1-40485408d24d", 00:16:43.516 "strip_size_kb": 0, 00:16:43.516 "state": "online", 00:16:43.516 "raid_level": "raid1", 00:16:43.516 "superblock": true, 00:16:43.516 "num_base_bdevs": 2, 00:16:43.516 "num_base_bdevs_discovered": 2, 00:16:43.516 "num_base_bdevs_operational": 2, 00:16:43.516 "base_bdevs_list": [ 00:16:43.516 { 00:16:43.516 "name": "BaseBdev1", 00:16:43.516 "uuid": "f684e121-3d9b-453b-8cce-1b6fbd2c6f7b", 00:16:43.516 "is_configured": true, 00:16:43.516 "data_offset": 256, 00:16:43.516 "data_size": 7936 00:16:43.516 }, 00:16:43.516 { 00:16:43.516 "name": "BaseBdev2", 00:16:43.516 "uuid": "9df88464-4cdc-47a6-9cac-431981e383c4", 00:16:43.516 "is_configured": true, 00:16:43.516 "data_offset": 256, 00:16:43.516 "data_size": 7936 00:16:43.516 } 00:16:43.516 ] 00:16:43.516 }' 00:16:43.516 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.516 03:23:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:43.775 03:23:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.775 [2024-11-20 03:23:33.345473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:43.775 "name": "Existed_Raid", 00:16:43.775 "aliases": [ 00:16:43.775 "35999195-2028-4867-99c1-40485408d24d" 00:16:43.775 ], 00:16:43.775 "product_name": "Raid Volume", 00:16:43.775 "block_size": 4096, 00:16:43.775 "num_blocks": 7936, 00:16:43.775 "uuid": "35999195-2028-4867-99c1-40485408d24d", 00:16:43.775 "assigned_rate_limits": { 00:16:43.775 "rw_ios_per_sec": 0, 00:16:43.775 "rw_mbytes_per_sec": 0, 00:16:43.775 "r_mbytes_per_sec": 0, 00:16:43.775 "w_mbytes_per_sec": 0 00:16:43.775 }, 00:16:43.775 "claimed": false, 00:16:43.775 "zoned": false, 00:16:43.775 "supported_io_types": { 00:16:43.775 "read": true, 
00:16:43.775 "write": true, 00:16:43.775 "unmap": false, 00:16:43.775 "flush": false, 00:16:43.775 "reset": true, 00:16:43.775 "nvme_admin": false, 00:16:43.775 "nvme_io": false, 00:16:43.775 "nvme_io_md": false, 00:16:43.775 "write_zeroes": true, 00:16:43.775 "zcopy": false, 00:16:43.775 "get_zone_info": false, 00:16:43.775 "zone_management": false, 00:16:43.775 "zone_append": false, 00:16:43.775 "compare": false, 00:16:43.775 "compare_and_write": false, 00:16:43.775 "abort": false, 00:16:43.775 "seek_hole": false, 00:16:43.775 "seek_data": false, 00:16:43.775 "copy": false, 00:16:43.775 "nvme_iov_md": false 00:16:43.775 }, 00:16:43.775 "memory_domains": [ 00:16:43.775 { 00:16:43.775 "dma_device_id": "system", 00:16:43.775 "dma_device_type": 1 00:16:43.775 }, 00:16:43.775 { 00:16:43.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.775 "dma_device_type": 2 00:16:43.775 }, 00:16:43.775 { 00:16:43.775 "dma_device_id": "system", 00:16:43.775 "dma_device_type": 1 00:16:43.775 }, 00:16:43.775 { 00:16:43.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.775 "dma_device_type": 2 00:16:43.775 } 00:16:43.775 ], 00:16:43.775 "driver_specific": { 00:16:43.775 "raid": { 00:16:43.775 "uuid": "35999195-2028-4867-99c1-40485408d24d", 00:16:43.775 "strip_size_kb": 0, 00:16:43.775 "state": "online", 00:16:43.775 "raid_level": "raid1", 00:16:43.775 "superblock": true, 00:16:43.775 "num_base_bdevs": 2, 00:16:43.775 "num_base_bdevs_discovered": 2, 00:16:43.775 "num_base_bdevs_operational": 2, 00:16:43.775 "base_bdevs_list": [ 00:16:43.775 { 00:16:43.775 "name": "BaseBdev1", 00:16:43.775 "uuid": "f684e121-3d9b-453b-8cce-1b6fbd2c6f7b", 00:16:43.775 "is_configured": true, 00:16:43.775 "data_offset": 256, 00:16:43.775 "data_size": 7936 00:16:43.775 }, 00:16:43.775 { 00:16:43.775 "name": "BaseBdev2", 00:16:43.775 "uuid": "9df88464-4cdc-47a6-9cac-431981e383c4", 00:16:43.775 "is_configured": true, 00:16:43.775 "data_offset": 256, 00:16:43.775 "data_size": 7936 00:16:43.775 } 
00:16:43.775 ] 00:16:43.775 } 00:16:43.775 } 00:16:43.775 }' 00:16:43.775 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:44.034 BaseBdev2' 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.034 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.034 [2024-11-20 03:23:33.568835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.292 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.292 "name": "Existed_Raid", 00:16:44.292 "uuid": "35999195-2028-4867-99c1-40485408d24d", 00:16:44.292 "strip_size_kb": 0, 00:16:44.292 "state": "online", 00:16:44.292 "raid_level": "raid1", 00:16:44.292 "superblock": true, 00:16:44.292 "num_base_bdevs": 2, 00:16:44.292 
"num_base_bdevs_discovered": 1, 00:16:44.292 "num_base_bdevs_operational": 1, 00:16:44.292 "base_bdevs_list": [ 00:16:44.292 { 00:16:44.292 "name": null, 00:16:44.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.292 "is_configured": false, 00:16:44.292 "data_offset": 0, 00:16:44.292 "data_size": 7936 00:16:44.292 }, 00:16:44.292 { 00:16:44.292 "name": "BaseBdev2", 00:16:44.292 "uuid": "9df88464-4cdc-47a6-9cac-431981e383c4", 00:16:44.292 "is_configured": true, 00:16:44.292 "data_offset": 256, 00:16:44.292 "data_size": 7936 00:16:44.292 } 00:16:44.292 ] 00:16:44.292 }' 00:16:44.293 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.293 03:23:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.551 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:44.551 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:44.551 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.551 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:44.551 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.551 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.551 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:44.809 03:23:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.809 [2024-11-20 03:23:34.210626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:44.809 [2024-11-20 03:23:34.210837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.809 [2024-11-20 03:23:34.325563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.809 [2024-11-20 03:23:34.325670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.809 [2024-11-20 03:23:34.325687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85738 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85738 ']' 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85738 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85738 00:16:44.809 killing process with pid 85738 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85738' 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85738 00:16:44.809 03:23:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85738 00:16:44.809 [2024-11-20 03:23:34.402555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.809 [2024-11-20 03:23:34.423573] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.184 03:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:46.184 ************************************ 00:16:46.184 END TEST raid_state_function_test_sb_4k 00:16:46.184 ************************************ 00:16:46.184 00:16:46.184 real 0m5.399s 00:16:46.184 user 
0m7.784s 00:16:46.184 sys 0m0.674s 00:16:46.184 03:23:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.184 03:23:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.184 03:23:35 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:46.184 03:23:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:46.184 03:23:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.184 03:23:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.443 ************************************ 00:16:46.443 START TEST raid_superblock_test_4k 00:16:46.443 ************************************ 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85991 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85991 00:16:46.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85991 ']' 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.443 03:23:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.443 [2024-11-20 03:23:35.931179] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:16:46.443 [2024-11-20 03:23:35.931326] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85991 ] 00:16:46.701 [2024-11-20 03:23:36.094509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.701 [2024-11-20 03:23:36.230653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.960 [2024-11-20 03:23:36.470000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.960 [2024-11-20 03:23:36.470078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.218 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.477 malloc1 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.477 [2024-11-20 03:23:36.872538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.477 [2024-11-20 03:23:36.872636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.477 [2024-11-20 03:23:36.872677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:47.477 [2024-11-20 03:23:36.872690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.477 [2024-11-20 03:23:36.875234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.477 [2024-11-20 03:23:36.875280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:47.477 pt1 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.477 malloc2 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.477 [2024-11-20 03:23:36.930235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:47.477 [2024-11-20 03:23:36.930311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.477 [2024-11-20 03:23:36.930338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:47.477 [2024-11-20 03:23:36.930348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.477 [2024-11-20 03:23:36.932833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.477 [2024-11-20 
03:23:36.932876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:47.477 pt2 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.477 [2024-11-20 03:23:36.942292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:47.477 [2024-11-20 03:23:36.944381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:47.477 [2024-11-20 03:23:36.944593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:47.477 [2024-11-20 03:23:36.944634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:47.477 [2024-11-20 03:23:36.944947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:47.477 [2024-11-20 03:23:36.945144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:47.477 [2024-11-20 03:23:36.945164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:47.477 [2024-11-20 03:23:36.945364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.477 "name": "raid_bdev1", 00:16:47.477 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 00:16:47.477 "strip_size_kb": 0, 00:16:47.477 "state": "online", 00:16:47.477 "raid_level": "raid1", 00:16:47.477 "superblock": true, 00:16:47.477 "num_base_bdevs": 2, 00:16:47.477 
"num_base_bdevs_discovered": 2, 00:16:47.477 "num_base_bdevs_operational": 2, 00:16:47.477 "base_bdevs_list": [ 00:16:47.477 { 00:16:47.477 "name": "pt1", 00:16:47.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.477 "is_configured": true, 00:16:47.477 "data_offset": 256, 00:16:47.477 "data_size": 7936 00:16:47.477 }, 00:16:47.477 { 00:16:47.477 "name": "pt2", 00:16:47.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.477 "is_configured": true, 00:16:47.477 "data_offset": 256, 00:16:47.477 "data_size": 7936 00:16:47.477 } 00:16:47.477 ] 00:16:47.477 }' 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.477 03:23:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.736 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:47.736 [2024-11-20 03:23:37.365829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:47.995 "name": "raid_bdev1", 00:16:47.995 "aliases": [ 00:16:47.995 "b3606699-4c58-4397-a9da-ee72cf79f78a" 00:16:47.995 ], 00:16:47.995 "product_name": "Raid Volume", 00:16:47.995 "block_size": 4096, 00:16:47.995 "num_blocks": 7936, 00:16:47.995 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 00:16:47.995 "assigned_rate_limits": { 00:16:47.995 "rw_ios_per_sec": 0, 00:16:47.995 "rw_mbytes_per_sec": 0, 00:16:47.995 "r_mbytes_per_sec": 0, 00:16:47.995 "w_mbytes_per_sec": 0 00:16:47.995 }, 00:16:47.995 "claimed": false, 00:16:47.995 "zoned": false, 00:16:47.995 "supported_io_types": { 00:16:47.995 "read": true, 00:16:47.995 "write": true, 00:16:47.995 "unmap": false, 00:16:47.995 "flush": false, 00:16:47.995 "reset": true, 00:16:47.995 "nvme_admin": false, 00:16:47.995 "nvme_io": false, 00:16:47.995 "nvme_io_md": false, 00:16:47.995 "write_zeroes": true, 00:16:47.995 "zcopy": false, 00:16:47.995 "get_zone_info": false, 00:16:47.995 "zone_management": false, 00:16:47.995 "zone_append": false, 00:16:47.995 "compare": false, 00:16:47.995 "compare_and_write": false, 00:16:47.995 "abort": false, 00:16:47.995 "seek_hole": false, 00:16:47.995 "seek_data": false, 00:16:47.995 "copy": false, 00:16:47.995 "nvme_iov_md": false 00:16:47.995 }, 00:16:47.995 "memory_domains": [ 00:16:47.995 { 00:16:47.995 "dma_device_id": "system", 00:16:47.995 "dma_device_type": 1 00:16:47.995 }, 00:16:47.995 { 00:16:47.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.995 "dma_device_type": 2 00:16:47.995 }, 00:16:47.995 { 00:16:47.995 "dma_device_id": "system", 00:16:47.995 "dma_device_type": 1 00:16:47.995 }, 00:16:47.995 { 00:16:47.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.995 "dma_device_type": 2 00:16:47.995 } 00:16:47.995 ], 
00:16:47.995 "driver_specific": { 00:16:47.995 "raid": { 00:16:47.995 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 00:16:47.995 "strip_size_kb": 0, 00:16:47.995 "state": "online", 00:16:47.995 "raid_level": "raid1", 00:16:47.995 "superblock": true, 00:16:47.995 "num_base_bdevs": 2, 00:16:47.995 "num_base_bdevs_discovered": 2, 00:16:47.995 "num_base_bdevs_operational": 2, 00:16:47.995 "base_bdevs_list": [ 00:16:47.995 { 00:16:47.995 "name": "pt1", 00:16:47.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.995 "is_configured": true, 00:16:47.995 "data_offset": 256, 00:16:47.995 "data_size": 7936 00:16:47.995 }, 00:16:47.995 { 00:16:47.995 "name": "pt2", 00:16:47.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.995 "is_configured": true, 00:16:47.995 "data_offset": 256, 00:16:47.995 "data_size": 7936 00:16:47.995 } 00:16:47.995 ] 00:16:47.995 } 00:16:47.995 } 00:16:47.995 }' 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:47.995 pt2' 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.995 03:23:37 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:47.995 [2024-11-20 03:23:37.609321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.995 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b3606699-4c58-4397-a9da-ee72cf79f78a 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z b3606699-4c58-4397-a9da-ee72cf79f78a ']' 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 [2024-11-20 03:23:37.656979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.256 [2024-11-20 03:23:37.657008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.256 [2024-11-20 03:23:37.657111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.256 [2024-11-20 03:23:37.657171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.256 [2024-11-20 03:23:37.657187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 [2024-11-20 03:23:37.784817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:48.256 [2024-11-20 03:23:37.786944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:48.256 [2024-11-20 03:23:37.787069] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:48.256 [2024-11-20 03:23:37.787174] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:48.256 [2024-11-20 03:23:37.787232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.256 [2024-11-20 03:23:37.787268] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:48.256 request: 00:16:48.256 { 00:16:48.256 "name": "raid_bdev1", 00:16:48.256 "raid_level": "raid1", 00:16:48.256 "base_bdevs": [ 00:16:48.256 "malloc1", 00:16:48.256 "malloc2" 00:16:48.256 ], 00:16:48.256 "superblock": false, 00:16:48.256 "method": "bdev_raid_create", 00:16:48.256 "req_id": 1 00:16:48.256 } 00:16:48.256 Got JSON-RPC error response 00:16:48.256 response: 00:16:48.256 { 00:16:48.256 "code": -17, 00:16:48.256 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:48.256 } 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 [2024-11-20 03:23:37.848704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.256 [2024-11-20 03:23:37.848820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.256 [2024-11-20 03:23:37.848866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:48.256 [2024-11-20 03:23:37.848900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.256 [2024-11-20 03:23:37.851330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.256 [2024-11-20 03:23:37.851412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.256 [2024-11-20 03:23:37.851538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:48.256 [2024-11-20 03:23:37.851673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.256 pt1 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.516 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.516 "name": "raid_bdev1", 00:16:48.516 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 00:16:48.516 "strip_size_kb": 0, 00:16:48.516 "state": "configuring", 00:16:48.516 "raid_level": "raid1", 00:16:48.516 "superblock": true, 00:16:48.516 "num_base_bdevs": 2, 00:16:48.516 "num_base_bdevs_discovered": 1, 00:16:48.516 "num_base_bdevs_operational": 2, 00:16:48.516 "base_bdevs_list": [ 00:16:48.516 { 00:16:48.516 "name": "pt1", 00:16:48.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.516 "is_configured": true, 00:16:48.516 "data_offset": 256, 00:16:48.516 "data_size": 7936 00:16:48.516 }, 00:16:48.516 { 00:16:48.516 "name": null, 00:16:48.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.516 "is_configured": false, 00:16:48.516 "data_offset": 256, 00:16:48.516 "data_size": 7936 00:16:48.516 } 
00:16:48.516 ] 00:16:48.516 }' 00:16:48.516 03:23:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.516 03:23:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.776 [2024-11-20 03:23:38.283985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.776 [2024-11-20 03:23:38.284126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.776 [2024-11-20 03:23:38.284154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:48.776 [2024-11-20 03:23:38.284184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.776 [2024-11-20 03:23:38.284727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.776 [2024-11-20 03:23:38.284752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.776 [2024-11-20 03:23:38.284843] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:48.776 [2024-11-20 03:23:38.284872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.776 [2024-11-20 03:23:38.285009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:48.776 [2024-11-20 03:23:38.285029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:48.776 [2024-11-20 03:23:38.285296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:48.776 [2024-11-20 03:23:38.285474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:48.776 [2024-11-20 03:23:38.285485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:48.776 [2024-11-20 03:23:38.285658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.776 pt2 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.776 "name": "raid_bdev1", 00:16:48.776 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 00:16:48.776 "strip_size_kb": 0, 00:16:48.776 "state": "online", 00:16:48.776 "raid_level": "raid1", 00:16:48.776 "superblock": true, 00:16:48.776 "num_base_bdevs": 2, 00:16:48.776 "num_base_bdevs_discovered": 2, 00:16:48.776 "num_base_bdevs_operational": 2, 00:16:48.776 "base_bdevs_list": [ 00:16:48.776 { 00:16:48.776 "name": "pt1", 00:16:48.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.776 "is_configured": true, 00:16:48.776 "data_offset": 256, 00:16:48.776 "data_size": 7936 00:16:48.776 }, 00:16:48.776 { 00:16:48.776 "name": "pt2", 00:16:48.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.776 "is_configured": true, 00:16:48.776 "data_offset": 256, 00:16:48.776 "data_size": 7936 00:16:48.776 } 00:16:48.776 ] 00:16:48.776 }' 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.776 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.343 [2024-11-20 03:23:38.751454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:49.343 "name": "raid_bdev1", 00:16:49.343 "aliases": [ 00:16:49.343 "b3606699-4c58-4397-a9da-ee72cf79f78a" 00:16:49.343 ], 00:16:49.343 "product_name": "Raid Volume", 00:16:49.343 "block_size": 4096, 00:16:49.343 "num_blocks": 7936, 00:16:49.343 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 00:16:49.343 "assigned_rate_limits": { 00:16:49.343 "rw_ios_per_sec": 0, 00:16:49.343 "rw_mbytes_per_sec": 0, 00:16:49.343 "r_mbytes_per_sec": 0, 00:16:49.343 "w_mbytes_per_sec": 0 00:16:49.343 }, 00:16:49.343 "claimed": false, 00:16:49.343 "zoned": false, 00:16:49.343 "supported_io_types": { 00:16:49.343 "read": true, 00:16:49.343 "write": true, 00:16:49.343 "unmap": false, 
00:16:49.343 "flush": false, 00:16:49.343 "reset": true, 00:16:49.343 "nvme_admin": false, 00:16:49.343 "nvme_io": false, 00:16:49.343 "nvme_io_md": false, 00:16:49.343 "write_zeroes": true, 00:16:49.343 "zcopy": false, 00:16:49.343 "get_zone_info": false, 00:16:49.343 "zone_management": false, 00:16:49.343 "zone_append": false, 00:16:49.343 "compare": false, 00:16:49.343 "compare_and_write": false, 00:16:49.343 "abort": false, 00:16:49.343 "seek_hole": false, 00:16:49.343 "seek_data": false, 00:16:49.343 "copy": false, 00:16:49.343 "nvme_iov_md": false 00:16:49.343 }, 00:16:49.343 "memory_domains": [ 00:16:49.343 { 00:16:49.343 "dma_device_id": "system", 00:16:49.343 "dma_device_type": 1 00:16:49.343 }, 00:16:49.343 { 00:16:49.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.343 "dma_device_type": 2 00:16:49.343 }, 00:16:49.343 { 00:16:49.343 "dma_device_id": "system", 00:16:49.343 "dma_device_type": 1 00:16:49.343 }, 00:16:49.343 { 00:16:49.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.343 "dma_device_type": 2 00:16:49.343 } 00:16:49.343 ], 00:16:49.343 "driver_specific": { 00:16:49.343 "raid": { 00:16:49.343 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 00:16:49.343 "strip_size_kb": 0, 00:16:49.343 "state": "online", 00:16:49.343 "raid_level": "raid1", 00:16:49.343 "superblock": true, 00:16:49.343 "num_base_bdevs": 2, 00:16:49.343 "num_base_bdevs_discovered": 2, 00:16:49.343 "num_base_bdevs_operational": 2, 00:16:49.343 "base_bdevs_list": [ 00:16:49.343 { 00:16:49.343 "name": "pt1", 00:16:49.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.343 "is_configured": true, 00:16:49.343 "data_offset": 256, 00:16:49.343 "data_size": 7936 00:16:49.343 }, 00:16:49.343 { 00:16:49.343 "name": "pt2", 00:16:49.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.343 "is_configured": true, 00:16:49.343 "data_offset": 256, 00:16:49.343 "data_size": 7936 00:16:49.343 } 00:16:49.343 ] 00:16:49.343 } 00:16:49.343 } 00:16:49.343 }' 00:16:49.343 
03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:49.343 pt2' 00:16:49.343 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.344 
03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.344 [2024-11-20 03:23:38.947126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' b3606699-4c58-4397-a9da-ee72cf79f78a '!=' b3606699-4c58-4397-a9da-ee72cf79f78a ']' 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.344 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.344 [2024-11-20 03:23:38.974851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:49.602 
03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.602 03:23:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.602 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.602 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.602 "name": "raid_bdev1", 00:16:49.602 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 
00:16:49.602 "strip_size_kb": 0, 00:16:49.602 "state": "online", 00:16:49.602 "raid_level": "raid1", 00:16:49.603 "superblock": true, 00:16:49.603 "num_base_bdevs": 2, 00:16:49.603 "num_base_bdevs_discovered": 1, 00:16:49.603 "num_base_bdevs_operational": 1, 00:16:49.603 "base_bdevs_list": [ 00:16:49.603 { 00:16:49.603 "name": null, 00:16:49.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.603 "is_configured": false, 00:16:49.603 "data_offset": 0, 00:16:49.603 "data_size": 7936 00:16:49.603 }, 00:16:49.603 { 00:16:49.603 "name": "pt2", 00:16:49.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.603 "is_configured": true, 00:16:49.603 "data_offset": 256, 00:16:49.603 "data_size": 7936 00:16:49.603 } 00:16:49.603 ] 00:16:49.603 }' 00:16:49.603 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.603 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.862 [2024-11-20 03:23:39.402124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.862 [2024-11-20 03:23:39.402205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.862 [2024-11-20 03:23:39.402331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.862 [2024-11-20 03:23:39.402409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.862 [2024-11-20 03:23:39.402491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:49.862 03:23:39 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:49.862 03:23:39 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.862 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.862 [2024-11-20 03:23:39.470013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.862 [2024-11-20 03:23:39.470089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.862 [2024-11-20 03:23:39.470109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:49.863 [2024-11-20 03:23:39.470121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.863 [2024-11-20 03:23:39.472519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.863 pt2 00:16:49.863 [2024-11-20 03:23:39.472603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.863 [2024-11-20 03:23:39.472740] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:49.863 [2024-11-20 03:23:39.472795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.863 [2024-11-20 03:23:39.472923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:49.863 [2024-11-20 03:23:39.472937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:49.863 [2024-11-20 03:23:39.473188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:49.863 [2024-11-20 03:23:39.473363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:49.863 [2024-11-20 03:23:39.473374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:16:49.863 [2024-11-20 03:23:39.473540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.863 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.123 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.123 03:23:39 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.123 "name": "raid_bdev1", 00:16:50.123 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 00:16:50.123 "strip_size_kb": 0, 00:16:50.123 "state": "online", 00:16:50.123 "raid_level": "raid1", 00:16:50.123 "superblock": true, 00:16:50.123 "num_base_bdevs": 2, 00:16:50.123 "num_base_bdevs_discovered": 1, 00:16:50.123 "num_base_bdevs_operational": 1, 00:16:50.123 "base_bdevs_list": [ 00:16:50.123 { 00:16:50.123 "name": null, 00:16:50.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.123 "is_configured": false, 00:16:50.123 "data_offset": 256, 00:16:50.123 "data_size": 7936 00:16:50.123 }, 00:16:50.123 { 00:16:50.123 "name": "pt2", 00:16:50.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.123 "is_configured": true, 00:16:50.123 "data_offset": 256, 00:16:50.123 "data_size": 7936 00:16:50.123 } 00:16:50.123 ] 00:16:50.123 }' 00:16:50.123 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.123 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.382 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:50.382 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.382 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.382 [2024-11-20 03:23:39.941162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.382 [2024-11-20 03:23:39.941264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.382 [2024-11-20 03:23:39.941374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.382 [2024-11-20 03:23:39.941457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.382 [2024-11-20 03:23:39.941505] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:50.382 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.382 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.382 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.382 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.383 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:50.383 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.383 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:50.383 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:50.383 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:50.383 03:23:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:50.383 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.383 03:23:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.383 [2024-11-20 03:23:40.001085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:50.383 [2024-11-20 03:23:40.001153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.383 [2024-11-20 03:23:40.001190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:50.383 [2024-11-20 03:23:40.001200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.383 [2024-11-20 03:23:40.003636] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.383 [2024-11-20 03:23:40.003676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:50.383 [2024-11-20 03:23:40.003774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:50.383 [2024-11-20 03:23:40.003825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:50.383 [2024-11-20 03:23:40.003972] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:50.383 [2024-11-20 03:23:40.003984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.383 [2024-11-20 03:23:40.004001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:50.383 [2024-11-20 03:23:40.004076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.383 [2024-11-20 03:23:40.004168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:50.383 [2024-11-20 03:23:40.004177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:50.383 [2024-11-20 03:23:40.004450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:50.383 [2024-11-20 03:23:40.004607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:50.383 [2024-11-20 03:23:40.004643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:50.383 pt1 00:16:50.383 [2024-11-20 03:23:40.004811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.383 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.642 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.642 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.642 "name": "raid_bdev1", 00:16:50.642 "uuid": "b3606699-4c58-4397-a9da-ee72cf79f78a", 00:16:50.642 "strip_size_kb": 0, 00:16:50.642 "state": "online", 00:16:50.642 "raid_level": "raid1", 
00:16:50.642 "superblock": true, 00:16:50.642 "num_base_bdevs": 2, 00:16:50.642 "num_base_bdevs_discovered": 1, 00:16:50.642 "num_base_bdevs_operational": 1, 00:16:50.642 "base_bdevs_list": [ 00:16:50.642 { 00:16:50.642 "name": null, 00:16:50.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.642 "is_configured": false, 00:16:50.642 "data_offset": 256, 00:16:50.642 "data_size": 7936 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "name": "pt2", 00:16:50.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.642 "is_configured": true, 00:16:50.642 "data_offset": 256, 00:16:50.642 "data_size": 7936 00:16:50.642 } 00:16:50.642 ] 00:16:50.642 }' 00:16:50.642 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.642 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.902 
[2024-11-20 03:23:40.496474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' b3606699-4c58-4397-a9da-ee72cf79f78a '!=' b3606699-4c58-4397-a9da-ee72cf79f78a ']' 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85991 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85991 ']' 00:16:50.902 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85991 00:16:51.177 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:51.177 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.177 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85991 00:16:51.177 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.177 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.177 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85991' 00:16:51.177 killing process with pid 85991 00:16:51.177 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85991 00:16:51.177 [2024-11-20 03:23:40.571144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.177 03:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85991 00:16:51.177 [2024-11-20 03:23:40.571313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.177 [2024-11-20 03:23:40.571395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:16:51.177 [2024-11-20 03:23:40.571450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:51.177 [2024-11-20 03:23:40.787201] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.631 03:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:52.631 00:16:52.631 real 0m6.091s 00:16:52.631 user 0m9.215s 00:16:52.631 sys 0m1.028s 00:16:52.631 03:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.631 03:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.631 ************************************ 00:16:52.631 END TEST raid_superblock_test_4k 00:16:52.631 ************************************ 00:16:52.631 03:23:41 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:52.631 03:23:41 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:52.631 03:23:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:52.631 03:23:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.631 03:23:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.631 ************************************ 00:16:52.631 START TEST raid_rebuild_test_sb_4k 00:16:52.631 ************************************ 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:52.631 03:23:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:52.631 03:23:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86318 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86318 00:16:52.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86318 ']' 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.631 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.631 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:52.631 Zero copy mechanism will not be used. 00:16:52.631 [2024-11-20 03:23:42.090509] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:16:52.631 [2024-11-20 03:23:42.090643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86318 ] 00:16:52.891 [2024-11-20 03:23:42.265256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.891 [2024-11-20 03:23:42.383808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.150 [2024-11-20 03:23:42.587808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.150 [2024-11-20 03:23:42.587968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.409 BaseBdev1_malloc 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.409 [2024-11-20 03:23:42.993923] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:53.409 [2024-11-20 03:23:42.994047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.409 [2024-11-20 03:23:42.994077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:53.409 [2024-11-20 03:23:42.994090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.409 [2024-11-20 03:23:42.996456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.409 [2024-11-20 03:23:42.996497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:53.409 BaseBdev1 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.409 03:23:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.669 BaseBdev2_malloc 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.669 [2024-11-20 03:23:43.049551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:53.669 [2024-11-20 03:23:43.049636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:53.669 [2024-11-20 03:23:43.049658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:53.669 [2024-11-20 03:23:43.049669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.669 [2024-11-20 03:23:43.051905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.669 [2024-11-20 03:23:43.051990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:53.669 BaseBdev2 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.669 spare_malloc 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.669 spare_delay 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.669 
[2024-11-20 03:23:43.134516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:53.669 [2024-11-20 03:23:43.134581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.669 [2024-11-20 03:23:43.134621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:53.669 [2024-11-20 03:23:43.134645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.669 [2024-11-20 03:23:43.137025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.669 [2024-11-20 03:23:43.137067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:53.669 spare 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:53.669 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.670 [2024-11-20 03:23:43.146583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.670 [2024-11-20 03:23:43.148497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.670 [2024-11-20 03:23:43.148793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:53.670 [2024-11-20 03:23:43.148818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:53.670 [2024-11-20 03:23:43.149118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:53.670 [2024-11-20 03:23:43.149316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:53.670 [2024-11-20 
03:23:43.149326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:53.670 [2024-11-20 03:23:43.149513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.670 "name": "raid_bdev1", 00:16:53.670 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:16:53.670 "strip_size_kb": 0, 00:16:53.670 "state": "online", 00:16:53.670 "raid_level": "raid1", 00:16:53.670 "superblock": true, 00:16:53.670 "num_base_bdevs": 2, 00:16:53.670 "num_base_bdevs_discovered": 2, 00:16:53.670 "num_base_bdevs_operational": 2, 00:16:53.670 "base_bdevs_list": [ 00:16:53.670 { 00:16:53.670 "name": "BaseBdev1", 00:16:53.670 "uuid": "7c425efd-e8b8-5a55-b233-879e1e49d4c2", 00:16:53.670 "is_configured": true, 00:16:53.670 "data_offset": 256, 00:16:53.670 "data_size": 7936 00:16:53.670 }, 00:16:53.670 { 00:16:53.670 "name": "BaseBdev2", 00:16:53.670 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:16:53.670 "is_configured": true, 00:16:53.670 "data_offset": 256, 00:16:53.670 "data_size": 7936 00:16:53.670 } 00:16:53.670 ] 00:16:53.670 }' 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.670 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.238 [2024-11-20 03:23:43.578109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:54.238 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:54.238 
03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:54.238 [2024-11-20 03:23:43.841414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:54.238 /dev/nbd0 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.498 1+0 records in 00:16:54.498 1+0 records out 00:16:54.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231793 s, 17.7 MB/s 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:54.498 03:23:43 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:54.498 03:23:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:55.068 7936+0 records in 00:16:55.068 7936+0 records out 00:16:55.068 32505856 bytes (33 MB, 31 MiB) copied, 0.629081 s, 51.7 MB/s 00:16:55.068 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:55.068 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:55.068 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:55.068 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:55.068 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:55.068 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.068 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:55.328 
[2024-11-20 03:23:44.746914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.328 [2024-11-20 03:23:44.761512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.328 03:23:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.328 "name": "raid_bdev1", 00:16:55.328 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:16:55.328 "strip_size_kb": 0, 00:16:55.328 "state": "online", 00:16:55.328 "raid_level": "raid1", 00:16:55.328 "superblock": true, 00:16:55.328 "num_base_bdevs": 2, 00:16:55.328 "num_base_bdevs_discovered": 1, 00:16:55.328 "num_base_bdevs_operational": 1, 00:16:55.328 "base_bdevs_list": [ 00:16:55.328 { 00:16:55.328 "name": null, 00:16:55.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.328 "is_configured": false, 00:16:55.328 "data_offset": 0, 00:16:55.328 "data_size": 7936 00:16:55.328 }, 00:16:55.328 { 00:16:55.328 "name": "BaseBdev2", 00:16:55.328 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:16:55.328 "is_configured": true, 00:16:55.328 "data_offset": 256, 00:16:55.328 
"data_size": 7936 00:16:55.328 } 00:16:55.328 ] 00:16:55.328 }' 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.328 03:23:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.588 03:23:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:55.588 03:23:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.588 03:23:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.588 [2024-11-20 03:23:45.176781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.588 [2024-11-20 03:23:45.193175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:16:55.588 03:23:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.588 03:23:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:55.588 [2024-11-20 03:23:45.194947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.008 "name": "raid_bdev1", 00:16:57.008 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:16:57.008 "strip_size_kb": 0, 00:16:57.008 "state": "online", 00:16:57.008 "raid_level": "raid1", 00:16:57.008 "superblock": true, 00:16:57.008 "num_base_bdevs": 2, 00:16:57.008 "num_base_bdevs_discovered": 2, 00:16:57.008 "num_base_bdevs_operational": 2, 00:16:57.008 "process": { 00:16:57.008 "type": "rebuild", 00:16:57.008 "target": "spare", 00:16:57.008 "progress": { 00:16:57.008 "blocks": 2560, 00:16:57.008 "percent": 32 00:16:57.008 } 00:16:57.008 }, 00:16:57.008 "base_bdevs_list": [ 00:16:57.008 { 00:16:57.008 "name": "spare", 00:16:57.008 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:16:57.008 "is_configured": true, 00:16:57.008 "data_offset": 256, 00:16:57.008 "data_size": 7936 00:16:57.008 }, 00:16:57.008 { 00:16:57.008 "name": "BaseBdev2", 00:16:57.008 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:16:57.008 "is_configured": true, 00:16:57.008 "data_offset": 256, 00:16:57.008 "data_size": 7936 00:16:57.008 } 00:16:57.008 ] 00:16:57.008 }' 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.008 [2024-11-20 03:23:46.354924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.008 [2024-11-20 03:23:46.399916] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.008 [2024-11-20 03:23:46.399977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.008 [2024-11-20 03:23:46.399992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.008 [2024-11-20 03:23:46.400001] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.008 "name": "raid_bdev1", 00:16:57.008 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:16:57.008 "strip_size_kb": 0, 00:16:57.008 "state": "online", 00:16:57.008 "raid_level": "raid1", 00:16:57.008 "superblock": true, 00:16:57.008 "num_base_bdevs": 2, 00:16:57.008 "num_base_bdevs_discovered": 1, 00:16:57.008 "num_base_bdevs_operational": 1, 00:16:57.008 "base_bdevs_list": [ 00:16:57.008 { 00:16:57.008 "name": null, 00:16:57.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.008 "is_configured": false, 00:16:57.008 "data_offset": 0, 00:16:57.008 "data_size": 7936 00:16:57.008 }, 00:16:57.008 { 00:16:57.008 "name": "BaseBdev2", 00:16:57.008 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:16:57.008 "is_configured": true, 00:16:57.008 "data_offset": 256, 00:16:57.008 "data_size": 7936 00:16:57.008 } 00:16:57.008 ] 00:16:57.008 }' 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.008 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.276 03:23:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.276 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.276 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.276 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.276 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.276 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.276 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.276 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.276 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.276 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.545 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.545 "name": "raid_bdev1", 00:16:57.545 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:16:57.545 "strip_size_kb": 0, 00:16:57.545 "state": "online", 00:16:57.545 "raid_level": "raid1", 00:16:57.545 "superblock": true, 00:16:57.545 "num_base_bdevs": 2, 00:16:57.545 "num_base_bdevs_discovered": 1, 00:16:57.545 "num_base_bdevs_operational": 1, 00:16:57.545 "base_bdevs_list": [ 00:16:57.545 { 00:16:57.545 "name": null, 00:16:57.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.545 "is_configured": false, 00:16:57.545 "data_offset": 0, 00:16:57.545 "data_size": 7936 00:16:57.545 }, 00:16:57.545 { 00:16:57.545 "name": "BaseBdev2", 00:16:57.545 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:16:57.545 "is_configured": true, 00:16:57.545 "data_offset": 
256, 00:16:57.545 "data_size": 7936 00:16:57.545 } 00:16:57.545 ] 00:16:57.545 }' 00:16:57.545 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.545 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.545 03:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.545 03:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.545 03:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:57.545 03:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.545 03:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.545 [2024-11-20 03:23:47.034449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.545 [2024-11-20 03:23:47.050607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:16:57.545 03:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.545 03:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:57.545 [2024-11-20 03:23:47.052446] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:58.520 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.520 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.520 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.520 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.521 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.521 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.521 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.521 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.521 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.521 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.521 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.521 "name": "raid_bdev1", 00:16:58.521 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:16:58.521 "strip_size_kb": 0, 00:16:58.521 "state": "online", 00:16:58.521 "raid_level": "raid1", 00:16:58.521 "superblock": true, 00:16:58.521 "num_base_bdevs": 2, 00:16:58.521 "num_base_bdevs_discovered": 2, 00:16:58.521 "num_base_bdevs_operational": 2, 00:16:58.521 "process": { 00:16:58.521 "type": "rebuild", 00:16:58.521 "target": "spare", 00:16:58.521 "progress": { 00:16:58.521 "blocks": 2560, 00:16:58.521 "percent": 32 00:16:58.521 } 00:16:58.521 }, 00:16:58.521 "base_bdevs_list": [ 00:16:58.521 { 00:16:58.521 "name": "spare", 00:16:58.521 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:16:58.521 "is_configured": true, 00:16:58.521 "data_offset": 256, 00:16:58.521 "data_size": 7936 00:16:58.521 }, 00:16:58.521 { 00:16:58.521 "name": "BaseBdev2", 00:16:58.521 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:16:58.521 "is_configured": true, 00:16:58.521 "data_offset": 256, 00:16:58.521 "data_size": 7936 00:16:58.521 } 00:16:58.521 ] 00:16:58.521 }' 00:16:58.521 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:58.794 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=672 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.794 "name": "raid_bdev1", 00:16:58.794 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:16:58.794 "strip_size_kb": 0, 00:16:58.794 "state": "online", 00:16:58.794 "raid_level": "raid1", 00:16:58.794 "superblock": true, 00:16:58.794 "num_base_bdevs": 2, 00:16:58.794 "num_base_bdevs_discovered": 2, 00:16:58.794 "num_base_bdevs_operational": 2, 00:16:58.794 "process": { 00:16:58.794 "type": "rebuild", 00:16:58.794 "target": "spare", 00:16:58.794 "progress": { 00:16:58.794 "blocks": 2816, 00:16:58.794 "percent": 35 00:16:58.794 } 00:16:58.794 }, 00:16:58.794 "base_bdevs_list": [ 00:16:58.794 { 00:16:58.794 "name": "spare", 00:16:58.794 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:16:58.794 "is_configured": true, 00:16:58.794 "data_offset": 256, 00:16:58.794 "data_size": 7936 00:16:58.794 }, 00:16:58.794 { 00:16:58.794 "name": "BaseBdev2", 00:16:58.794 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:16:58.794 "is_configured": true, 00:16:58.794 "data_offset": 256, 00:16:58.794 "data_size": 7936 00:16:58.794 } 00:16:58.794 ] 00:16:58.794 }' 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.794 03:23:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.734 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.734 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.734 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.734 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.734 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.734 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.993 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.993 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.994 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.994 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.994 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.994 "name": "raid_bdev1", 00:16:59.994 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:16:59.994 "strip_size_kb": 0, 00:16:59.994 "state": "online", 00:16:59.994 "raid_level": "raid1", 00:16:59.994 "superblock": true, 00:16:59.994 "num_base_bdevs": 2, 00:16:59.994 "num_base_bdevs_discovered": 2, 00:16:59.994 "num_base_bdevs_operational": 2, 00:16:59.994 "process": { 00:16:59.994 "type": "rebuild", 00:16:59.994 "target": "spare", 00:16:59.994 "progress": { 00:16:59.994 "blocks": 5888, 00:16:59.994 "percent": 74 00:16:59.994 } 00:16:59.994 }, 00:16:59.994 "base_bdevs_list": [ 00:16:59.994 { 
00:16:59.994 "name": "spare", 00:16:59.994 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:16:59.994 "is_configured": true, 00:16:59.994 "data_offset": 256, 00:16:59.994 "data_size": 7936 00:16:59.994 }, 00:16:59.994 { 00:16:59.994 "name": "BaseBdev2", 00:16:59.994 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:16:59.994 "is_configured": true, 00:16:59.994 "data_offset": 256, 00:16:59.994 "data_size": 7936 00:16:59.994 } 00:16:59.994 ] 00:16:59.994 }' 00:16:59.994 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.994 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.994 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.994 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.994 03:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.564 [2024-11-20 03:23:50.164610] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:00.564 [2024-11-20 03:23:50.164732] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:00.564 [2024-11-20 03:23:50.164864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.134 "name": "raid_bdev1", 00:17:01.134 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:01.134 "strip_size_kb": 0, 00:17:01.134 "state": "online", 00:17:01.134 "raid_level": "raid1", 00:17:01.134 "superblock": true, 00:17:01.134 "num_base_bdevs": 2, 00:17:01.134 "num_base_bdevs_discovered": 2, 00:17:01.134 "num_base_bdevs_operational": 2, 00:17:01.134 "base_bdevs_list": [ 00:17:01.134 { 00:17:01.134 "name": "spare", 00:17:01.134 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:17:01.134 "is_configured": true, 00:17:01.134 "data_offset": 256, 00:17:01.134 "data_size": 7936 00:17:01.134 }, 00:17:01.134 { 00:17:01.134 "name": "BaseBdev2", 00:17:01.134 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:01.134 "is_configured": true, 00:17:01.134 "data_offset": 256, 00:17:01.134 "data_size": 7936 00:17:01.134 } 00:17:01.134 ] 00:17:01.134 }' 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.134 "name": "raid_bdev1", 00:17:01.134 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:01.134 "strip_size_kb": 0, 00:17:01.134 "state": "online", 00:17:01.134 "raid_level": "raid1", 00:17:01.134 "superblock": true, 00:17:01.134 "num_base_bdevs": 2, 00:17:01.134 "num_base_bdevs_discovered": 2, 00:17:01.134 "num_base_bdevs_operational": 2, 00:17:01.134 "base_bdevs_list": [ 00:17:01.134 { 00:17:01.134 "name": "spare", 00:17:01.134 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:17:01.134 "is_configured": true, 00:17:01.134 
"data_offset": 256, 00:17:01.134 "data_size": 7936 00:17:01.134 }, 00:17:01.134 { 00:17:01.134 "name": "BaseBdev2", 00:17:01.134 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:01.134 "is_configured": true, 00:17:01.134 "data_offset": 256, 00:17:01.134 "data_size": 7936 00:17:01.134 } 00:17:01.134 ] 00:17:01.134 }' 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.134 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.394 "name": "raid_bdev1", 00:17:01.394 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:01.394 "strip_size_kb": 0, 00:17:01.394 "state": "online", 00:17:01.394 "raid_level": "raid1", 00:17:01.394 "superblock": true, 00:17:01.394 "num_base_bdevs": 2, 00:17:01.394 "num_base_bdevs_discovered": 2, 00:17:01.394 "num_base_bdevs_operational": 2, 00:17:01.394 "base_bdevs_list": [ 00:17:01.394 { 00:17:01.394 "name": "spare", 00:17:01.394 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:17:01.394 "is_configured": true, 00:17:01.394 "data_offset": 256, 00:17:01.394 "data_size": 7936 00:17:01.394 }, 00:17:01.394 { 00:17:01.394 "name": "BaseBdev2", 00:17:01.394 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:01.394 "is_configured": true, 00:17:01.394 "data_offset": 256, 00:17:01.394 "data_size": 7936 00:17:01.394 } 00:17:01.394 ] 00:17:01.394 }' 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.394 03:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.653 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.653 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.653 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.653 
[2024-11-20 03:23:51.276585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.653 [2024-11-20 03:23:51.276672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.653 [2024-11-20 03:23:51.276801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.653 [2024-11-20 03:23:51.276893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.653 [2024-11-20 03:23:51.276960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:01.653 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.653 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.913 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:01.913 /dev/nbd0 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.172 1+0 records in 00:17:02.172 1+0 records out 00:17:02.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647943 s, 6.3 MB/s 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:02.172 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.173 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.173 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:02.173 /dev/nbd1 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.432 1+0 records in 00:17:02.432 1+0 records out 00:17:02.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400904 s, 10.2 MB/s 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.432 03:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:02.432 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:02.432 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.432 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.432 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:02.432 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:02.432 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.432 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:02.691 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.691 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:02.691 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.691 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.691 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.692 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.692 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:02.692 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.692 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.692 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:02.952 03:23:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.952 [2024-11-20 03:23:52.451745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.952 [2024-11-20 03:23:52.451816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.952 [2024-11-20 03:23:52.451843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:02.952 [2024-11-20 03:23:52.451854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.952 [2024-11-20 03:23:52.454423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.952 
[2024-11-20 03:23:52.454468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.952 [2024-11-20 03:23:52.454584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:02.952 [2024-11-20 03:23:52.454663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.952 [2024-11-20 03:23:52.454867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.952 spare 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.952 [2024-11-20 03:23:52.554795] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:02.952 [2024-11-20 03:23:52.554831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:02.952 [2024-11-20 03:23:52.555120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:02.952 [2024-11-20 03:23:52.555334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:02.952 [2024-11-20 03:23:52.555352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:02.952 [2024-11-20 03:23:52.555535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.952 03:23:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.952 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.212 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.212 "name": "raid_bdev1", 00:17:03.212 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:03.212 "strip_size_kb": 0, 00:17:03.212 "state": "online", 00:17:03.212 "raid_level": "raid1", 00:17:03.212 "superblock": true, 00:17:03.212 "num_base_bdevs": 2, 00:17:03.212 "num_base_bdevs_discovered": 2, 00:17:03.212 "num_base_bdevs_operational": 2, 
00:17:03.212 "base_bdevs_list": [ 00:17:03.212 { 00:17:03.212 "name": "spare", 00:17:03.212 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:17:03.212 "is_configured": true, 00:17:03.212 "data_offset": 256, 00:17:03.212 "data_size": 7936 00:17:03.212 }, 00:17:03.212 { 00:17:03.212 "name": "BaseBdev2", 00:17:03.212 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:03.212 "is_configured": true, 00:17:03.212 "data_offset": 256, 00:17:03.212 "data_size": 7936 00:17:03.212 } 00:17:03.212 ] 00:17:03.212 }' 00:17:03.212 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.212 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.471 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:03.471 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.471 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:03.471 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:03.471 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.471 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.471 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.471 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.471 03:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.471 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.471 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.471 "name": "raid_bdev1", 00:17:03.471 
"uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:03.471 "strip_size_kb": 0, 00:17:03.471 "state": "online", 00:17:03.471 "raid_level": "raid1", 00:17:03.471 "superblock": true, 00:17:03.471 "num_base_bdevs": 2, 00:17:03.471 "num_base_bdevs_discovered": 2, 00:17:03.471 "num_base_bdevs_operational": 2, 00:17:03.471 "base_bdevs_list": [ 00:17:03.471 { 00:17:03.471 "name": "spare", 00:17:03.471 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:17:03.471 "is_configured": true, 00:17:03.471 "data_offset": 256, 00:17:03.471 "data_size": 7936 00:17:03.471 }, 00:17:03.471 { 00:17:03.471 "name": "BaseBdev2", 00:17:03.471 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:03.471 "is_configured": true, 00:17:03.471 "data_offset": 256, 00:17:03.471 "data_size": 7936 00:17:03.471 } 00:17:03.471 ] 00:17:03.471 }' 00:17:03.471 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.472 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.472 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.731 [2024-11-20 03:23:53.154733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.731 
03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.731 "name": "raid_bdev1", 00:17:03.731 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:03.731 "strip_size_kb": 0, 00:17:03.731 "state": "online", 00:17:03.731 "raid_level": "raid1", 00:17:03.731 "superblock": true, 00:17:03.731 "num_base_bdevs": 2, 00:17:03.731 "num_base_bdevs_discovered": 1, 00:17:03.731 "num_base_bdevs_operational": 1, 00:17:03.731 "base_bdevs_list": [ 00:17:03.731 { 00:17:03.731 "name": null, 00:17:03.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.731 "is_configured": false, 00:17:03.731 "data_offset": 0, 00:17:03.731 "data_size": 7936 00:17:03.731 }, 00:17:03.731 { 00:17:03.731 "name": "BaseBdev2", 00:17:03.731 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:03.731 "is_configured": true, 00:17:03.731 "data_offset": 256, 00:17:03.731 "data_size": 7936 00:17:03.731 } 00:17:03.731 ] 00:17:03.731 }' 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.731 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.991 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:03.991 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.250 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.250 [2024-11-20 03:23:53.630651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.250 [2024-11-20 03:23:53.630794] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:17:04.250 [2024-11-20 03:23:53.630818] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:04.250 [2024-11-20 03:23:53.630859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.250 [2024-11-20 03:23:53.648146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:04.250 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.250 03:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:04.250 [2024-11-20 03:23:53.650166] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.189 
"name": "raid_bdev1", 00:17:05.189 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:05.189 "strip_size_kb": 0, 00:17:05.189 "state": "online", 00:17:05.189 "raid_level": "raid1", 00:17:05.189 "superblock": true, 00:17:05.189 "num_base_bdevs": 2, 00:17:05.189 "num_base_bdevs_discovered": 2, 00:17:05.189 "num_base_bdevs_operational": 2, 00:17:05.189 "process": { 00:17:05.189 "type": "rebuild", 00:17:05.189 "target": "spare", 00:17:05.189 "progress": { 00:17:05.189 "blocks": 2560, 00:17:05.189 "percent": 32 00:17:05.189 } 00:17:05.189 }, 00:17:05.189 "base_bdevs_list": [ 00:17:05.189 { 00:17:05.189 "name": "spare", 00:17:05.189 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:17:05.189 "is_configured": true, 00:17:05.189 "data_offset": 256, 00:17:05.189 "data_size": 7936 00:17:05.189 }, 00:17:05.189 { 00:17:05.189 "name": "BaseBdev2", 00:17:05.189 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:05.189 "is_configured": true, 00:17:05.189 "data_offset": 256, 00:17:05.189 "data_size": 7936 00:17:05.189 } 00:17:05.189 ] 00:17:05.189 }' 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.189 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.189 [2024-11-20 03:23:54.807106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.449 [2024-11-20 
03:23:54.858897] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:05.449 [2024-11-20 03:23:54.858962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.449 [2024-11-20 03:23:54.858979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.449 [2024-11-20 03:23:54.858990] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.449 "name": "raid_bdev1", 00:17:05.449 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:05.449 "strip_size_kb": 0, 00:17:05.449 "state": "online", 00:17:05.449 "raid_level": "raid1", 00:17:05.449 "superblock": true, 00:17:05.449 "num_base_bdevs": 2, 00:17:05.449 "num_base_bdevs_discovered": 1, 00:17:05.449 "num_base_bdevs_operational": 1, 00:17:05.449 "base_bdevs_list": [ 00:17:05.449 { 00:17:05.449 "name": null, 00:17:05.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.449 "is_configured": false, 00:17:05.449 "data_offset": 0, 00:17:05.449 "data_size": 7936 00:17:05.449 }, 00:17:05.449 { 00:17:05.449 "name": "BaseBdev2", 00:17:05.449 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:05.449 "is_configured": true, 00:17:05.449 "data_offset": 256, 00:17:05.449 "data_size": 7936 00:17:05.449 } 00:17:05.449 ] 00:17:05.449 }' 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.449 03:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.709 03:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:05.709 03:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.709 03:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.709 [2024-11-20 03:23:55.336749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:05.709 [2024-11-20 03:23:55.336814] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.709 [2024-11-20 03:23:55.336837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:05.709 [2024-11-20 03:23:55.336851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.709 [2024-11-20 03:23:55.337354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.709 [2024-11-20 03:23:55.337388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:05.709 [2024-11-20 03:23:55.337477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:05.709 [2024-11-20 03:23:55.337501] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:05.709 [2024-11-20 03:23:55.337512] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:05.709 [2024-11-20 03:23:55.337541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.971 [2024-11-20 03:23:55.352877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:05.971 spare 00:17:05.971 03:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.971 03:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:05.971 [2024-11-20 03:23:55.354951] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.913 "name": "raid_bdev1", 00:17:06.913 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:06.913 "strip_size_kb": 0, 00:17:06.913 
"state": "online", 00:17:06.913 "raid_level": "raid1", 00:17:06.913 "superblock": true, 00:17:06.913 "num_base_bdevs": 2, 00:17:06.913 "num_base_bdevs_discovered": 2, 00:17:06.913 "num_base_bdevs_operational": 2, 00:17:06.913 "process": { 00:17:06.913 "type": "rebuild", 00:17:06.913 "target": "spare", 00:17:06.913 "progress": { 00:17:06.913 "blocks": 2560, 00:17:06.913 "percent": 32 00:17:06.913 } 00:17:06.913 }, 00:17:06.913 "base_bdevs_list": [ 00:17:06.913 { 00:17:06.913 "name": "spare", 00:17:06.913 "uuid": "9b3b437a-16cd-5c08-ba6b-17400e099383", 00:17:06.913 "is_configured": true, 00:17:06.913 "data_offset": 256, 00:17:06.913 "data_size": 7936 00:17:06.913 }, 00:17:06.913 { 00:17:06.913 "name": "BaseBdev2", 00:17:06.913 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:06.913 "is_configured": true, 00:17:06.913 "data_offset": 256, 00:17:06.913 "data_size": 7936 00:17:06.913 } 00:17:06.913 ] 00:17:06.913 }' 00:17:06.913 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.914 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.914 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.914 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.914 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:06.914 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.914 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.914 [2024-11-20 03:23:56.514974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.173 [2024-11-20 03:23:56.562824] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:07.173 [2024-11-20 03:23:56.562885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.173 [2024-11-20 03:23:56.562906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.173 [2024-11-20 03:23:56.562914] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.173 03:23:56 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.173 "name": "raid_bdev1", 00:17:07.173 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:07.173 "strip_size_kb": 0, 00:17:07.173 "state": "online", 00:17:07.173 "raid_level": "raid1", 00:17:07.173 "superblock": true, 00:17:07.173 "num_base_bdevs": 2, 00:17:07.173 "num_base_bdevs_discovered": 1, 00:17:07.173 "num_base_bdevs_operational": 1, 00:17:07.173 "base_bdevs_list": [ 00:17:07.173 { 00:17:07.173 "name": null, 00:17:07.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.173 "is_configured": false, 00:17:07.173 "data_offset": 0, 00:17:07.173 "data_size": 7936 00:17:07.173 }, 00:17:07.173 { 00:17:07.173 "name": "BaseBdev2", 00:17:07.173 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:07.173 "is_configured": true, 00:17:07.173 "data_offset": 256, 00:17:07.173 "data_size": 7936 00:17:07.173 } 00:17:07.173 ] 00:17:07.173 }' 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.173 03:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.433 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.433 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.433 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.433 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.433 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.433 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.434 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.434 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.434 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.434 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.434 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.434 "name": "raid_bdev1", 00:17:07.434 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:07.434 "strip_size_kb": 0, 00:17:07.434 "state": "online", 00:17:07.434 "raid_level": "raid1", 00:17:07.434 "superblock": true, 00:17:07.434 "num_base_bdevs": 2, 00:17:07.434 "num_base_bdevs_discovered": 1, 00:17:07.434 "num_base_bdevs_operational": 1, 00:17:07.434 "base_bdevs_list": [ 00:17:07.434 { 00:17:07.434 "name": null, 00:17:07.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.434 "is_configured": false, 00:17:07.434 "data_offset": 0, 00:17:07.434 "data_size": 7936 00:17:07.434 }, 00:17:07.434 { 00:17:07.434 "name": "BaseBdev2", 00:17:07.434 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:07.434 "is_configured": true, 00:17:07.434 "data_offset": 256, 00:17:07.434 "data_size": 7936 00:17:07.434 } 00:17:07.434 ] 00:17:07.434 }' 00:17:07.434 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.694 [2024-11-20 03:23:57.160732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:07.694 [2024-11-20 03:23:57.160787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.694 [2024-11-20 03:23:57.160815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:07.694 [2024-11-20 03:23:57.160836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.694 [2024-11-20 03:23:57.161303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.694 [2024-11-20 03:23:57.161330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.694 [2024-11-20 03:23:57.161416] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:07.694 [2024-11-20 03:23:57.161437] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:07.694 [2024-11-20 03:23:57.161453] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:07.694 [2024-11-20 03:23:57.161466] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:07.694 BaseBdev1 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.694 03:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.633 "name": "raid_bdev1", 00:17:08.633 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:08.633 "strip_size_kb": 0, 00:17:08.633 "state": "online", 00:17:08.633 "raid_level": "raid1", 00:17:08.633 "superblock": true, 00:17:08.633 "num_base_bdevs": 2, 00:17:08.633 "num_base_bdevs_discovered": 1, 00:17:08.633 "num_base_bdevs_operational": 1, 00:17:08.633 "base_bdevs_list": [ 00:17:08.633 { 00:17:08.633 "name": null, 00:17:08.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.633 "is_configured": false, 00:17:08.633 "data_offset": 0, 00:17:08.633 "data_size": 7936 00:17:08.633 }, 00:17:08.633 { 00:17:08.633 "name": "BaseBdev2", 00:17:08.633 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:08.633 "is_configured": true, 00:17:08.633 "data_offset": 256, 00:17:08.633 "data_size": 7936 00:17:08.633 } 00:17:08.633 ] 00:17:08.633 }' 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.633 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.202 "name": "raid_bdev1", 00:17:09.202 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:09.202 "strip_size_kb": 0, 00:17:09.202 "state": "online", 00:17:09.202 "raid_level": "raid1", 00:17:09.202 "superblock": true, 00:17:09.202 "num_base_bdevs": 2, 00:17:09.202 "num_base_bdevs_discovered": 1, 00:17:09.202 "num_base_bdevs_operational": 1, 00:17:09.202 "base_bdevs_list": [ 00:17:09.202 { 00:17:09.202 "name": null, 00:17:09.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.202 "is_configured": false, 00:17:09.202 "data_offset": 0, 00:17:09.202 "data_size": 7936 00:17:09.202 }, 00:17:09.202 { 00:17:09.202 "name": "BaseBdev2", 00:17:09.202 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:09.202 "is_configured": true, 00:17:09.202 "data_offset": 256, 00:17:09.202 "data_size": 7936 00:17:09.202 } 00:17:09.202 ] 00:17:09.202 }' 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.202 [2024-11-20 03:23:58.706738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.202 [2024-11-20 03:23:58.706866] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:09.202 [2024-11-20 03:23:58.706887] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:09.202 request: 00:17:09.202 { 00:17:09.202 "base_bdev": "BaseBdev1", 00:17:09.202 "raid_bdev": "raid_bdev1", 00:17:09.202 "method": "bdev_raid_add_base_bdev", 00:17:09.202 "req_id": 1 00:17:09.202 } 00:17:09.202 Got JSON-RPC error response 00:17:09.202 response: 00:17:09.202 { 00:17:09.202 "code": -22, 00:17:09.202 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:09.202 } 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.202 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.203 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.203 03:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.141 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.400 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.400 "name": "raid_bdev1", 00:17:10.400 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:10.400 "strip_size_kb": 0, 00:17:10.400 "state": "online", 00:17:10.400 "raid_level": "raid1", 00:17:10.400 "superblock": true, 00:17:10.400 "num_base_bdevs": 2, 00:17:10.400 "num_base_bdevs_discovered": 1, 00:17:10.400 "num_base_bdevs_operational": 1, 00:17:10.400 "base_bdevs_list": [ 00:17:10.400 { 00:17:10.400 "name": null, 00:17:10.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.400 "is_configured": false, 00:17:10.400 "data_offset": 0, 00:17:10.400 "data_size": 7936 00:17:10.400 }, 00:17:10.400 { 00:17:10.400 "name": "BaseBdev2", 00:17:10.400 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:10.400 "is_configured": true, 00:17:10.400 "data_offset": 256, 00:17:10.400 "data_size": 7936 00:17:10.400 } 00:17:10.400 ] 00:17:10.400 }' 00:17:10.400 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.400 03:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.660 03:24:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.660 "name": "raid_bdev1", 00:17:10.660 "uuid": "3fe23c77-2200-40a1-b5b4-c5a355108ee3", 00:17:10.660 "strip_size_kb": 0, 00:17:10.660 "state": "online", 00:17:10.660 "raid_level": "raid1", 00:17:10.660 "superblock": true, 00:17:10.660 "num_base_bdevs": 2, 00:17:10.660 "num_base_bdevs_discovered": 1, 00:17:10.660 "num_base_bdevs_operational": 1, 00:17:10.660 "base_bdevs_list": [ 00:17:10.660 { 00:17:10.660 "name": null, 00:17:10.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.660 "is_configured": false, 00:17:10.660 "data_offset": 0, 00:17:10.660 "data_size": 7936 00:17:10.660 }, 00:17:10.660 { 00:17:10.660 "name": "BaseBdev2", 00:17:10.660 "uuid": "1ce00a7a-6778-5afc-918c-a2b07e63e507", 00:17:10.660 "is_configured": true, 00:17:10.660 "data_offset": 256, 00:17:10.660 "data_size": 7936 00:17:10.660 } 00:17:10.660 ] 00:17:10.660 }' 00:17:10.660 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.920 03:24:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86318 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86318 ']' 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86318 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86318 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.920 killing process with pid 86318 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86318' 00:17:10.920 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86318 00:17:10.920 Received shutdown signal, test time was about 60.000000 seconds 00:17:10.920 00:17:10.921 Latency(us) 00:17:10.921 [2024-11-20T03:24:00.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.921 [2024-11-20T03:24:00.556Z] =================================================================================================================== 00:17:10.921 [2024-11-20T03:24:00.556Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:10.921 [2024-11-20 03:24:00.383656] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.921 03:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86318 00:17:10.921 [2024-11-20 03:24:00.383763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.921 [2024-11-20 
03:24:00.383807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.921 [2024-11-20 03:24:00.383821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:11.181 [2024-11-20 03:24:00.695663] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.565 03:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:12.565 00:17:12.565 real 0m19.834s 00:17:12.565 user 0m25.960s 00:17:12.565 sys 0m2.541s 00:17:12.565 03:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.565 03:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.565 ************************************ 00:17:12.565 END TEST raid_rebuild_test_sb_4k 00:17:12.565 ************************************ 00:17:12.565 03:24:01 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:12.565 03:24:01 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:12.565 03:24:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:12.565 03:24:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.565 03:24:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.565 ************************************ 00:17:12.565 START TEST raid_state_function_test_sb_md_separate 00:17:12.565 ************************************ 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:12.565 
03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:12.565 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:12.566 03:24:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87004 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:12.566 Process raid pid: 87004 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87004' 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87004 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87004 ']' 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.566 03:24:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.566 [2024-11-20 03:24:02.008300] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:12.566 [2024-11-20 03:24:02.008438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.566 [2024-11-20 03:24:02.183469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.826 [2024-11-20 03:24:02.313070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.086 [2024-11-20 03:24:02.543576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.086 [2024-11-20 03:24:02.543632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.346 [2024-11-20 03:24:02.828485] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.346 [2024-11-20 03:24:02.828547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:13.346 [2024-11-20 03:24:02.828559] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.346 [2024-11-20 03:24:02.828570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.346 03:24:02 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.346 "name": "Existed_Raid", 00:17:13.346 "uuid": "c79c5524-6be7-4eed-a8a3-fa600e2e075b", 00:17:13.346 "strip_size_kb": 0, 00:17:13.346 "state": "configuring", 00:17:13.346 "raid_level": "raid1", 00:17:13.346 "superblock": true, 00:17:13.346 "num_base_bdevs": 2, 00:17:13.346 "num_base_bdevs_discovered": 0, 00:17:13.346 "num_base_bdevs_operational": 2, 00:17:13.346 "base_bdevs_list": [ 00:17:13.346 { 00:17:13.346 "name": "BaseBdev1", 00:17:13.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.346 "is_configured": false, 00:17:13.346 "data_offset": 0, 00:17:13.346 "data_size": 0 00:17:13.346 }, 00:17:13.346 { 00:17:13.346 "name": "BaseBdev2", 00:17:13.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.346 "is_configured": false, 00:17:13.346 "data_offset": 0, 00:17:13.346 "data_size": 0 00:17:13.346 } 00:17:13.346 ] 00:17:13.346 }' 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.346 03:24:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.916 [2024-11-20 
03:24:03.263747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.916 [2024-11-20 03:24:03.263786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.916 [2024-11-20 03:24:03.275745] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.916 [2024-11-20 03:24:03.275783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.916 [2024-11-20 03:24:03.275792] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.916 [2024-11-20 03:24:03.275805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.916 [2024-11-20 03:24:03.329574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.916 BaseBdev1 
00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.916 [ 00:17:13.916 { 00:17:13.916 "name": "BaseBdev1", 00:17:13.916 "aliases": [ 00:17:13.916 "6e81a3dd-27e7-45b1-8b1d-2586d1b8aac5" 00:17:13.916 ], 00:17:13.916 "product_name": "Malloc disk", 00:17:13.916 
"block_size": 4096, 00:17:13.916 "num_blocks": 8192, 00:17:13.916 "uuid": "6e81a3dd-27e7-45b1-8b1d-2586d1b8aac5", 00:17:13.916 "md_size": 32, 00:17:13.916 "md_interleave": false, 00:17:13.916 "dif_type": 0, 00:17:13.916 "assigned_rate_limits": { 00:17:13.916 "rw_ios_per_sec": 0, 00:17:13.916 "rw_mbytes_per_sec": 0, 00:17:13.916 "r_mbytes_per_sec": 0, 00:17:13.916 "w_mbytes_per_sec": 0 00:17:13.916 }, 00:17:13.916 "claimed": true, 00:17:13.916 "claim_type": "exclusive_write", 00:17:13.916 "zoned": false, 00:17:13.916 "supported_io_types": { 00:17:13.916 "read": true, 00:17:13.916 "write": true, 00:17:13.916 "unmap": true, 00:17:13.916 "flush": true, 00:17:13.916 "reset": true, 00:17:13.916 "nvme_admin": false, 00:17:13.916 "nvme_io": false, 00:17:13.916 "nvme_io_md": false, 00:17:13.916 "write_zeroes": true, 00:17:13.916 "zcopy": true, 00:17:13.916 "get_zone_info": false, 00:17:13.916 "zone_management": false, 00:17:13.916 "zone_append": false, 00:17:13.916 "compare": false, 00:17:13.916 "compare_and_write": false, 00:17:13.916 "abort": true, 00:17:13.916 "seek_hole": false, 00:17:13.916 "seek_data": false, 00:17:13.916 "copy": true, 00:17:13.916 "nvme_iov_md": false 00:17:13.916 }, 00:17:13.916 "memory_domains": [ 00:17:13.916 { 00:17:13.916 "dma_device_id": "system", 00:17:13.916 "dma_device_type": 1 00:17:13.916 }, 00:17:13.916 { 00:17:13.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.916 "dma_device_type": 2 00:17:13.916 } 00:17:13.916 ], 00:17:13.916 "driver_specific": {} 00:17:13.916 } 00:17:13.916 ] 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:13.916 03:24:03 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.916 "name": "Existed_Raid", 00:17:13.916 "uuid": "968718a0-84bd-48bf-b54e-d9a4de7da945", 
00:17:13.916 "strip_size_kb": 0, 00:17:13.916 "state": "configuring", 00:17:13.916 "raid_level": "raid1", 00:17:13.916 "superblock": true, 00:17:13.916 "num_base_bdevs": 2, 00:17:13.916 "num_base_bdevs_discovered": 1, 00:17:13.916 "num_base_bdevs_operational": 2, 00:17:13.916 "base_bdevs_list": [ 00:17:13.916 { 00:17:13.916 "name": "BaseBdev1", 00:17:13.916 "uuid": "6e81a3dd-27e7-45b1-8b1d-2586d1b8aac5", 00:17:13.916 "is_configured": true, 00:17:13.916 "data_offset": 256, 00:17:13.916 "data_size": 7936 00:17:13.916 }, 00:17:13.916 { 00:17:13.916 "name": "BaseBdev2", 00:17:13.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.916 "is_configured": false, 00:17:13.916 "data_offset": 0, 00:17:13.916 "data_size": 0 00:17:13.916 } 00:17:13.916 ] 00:17:13.916 }' 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.916 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.486 [2024-11-20 03:24:03.828735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:14.486 [2024-11-20 03:24:03.828782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:14.486 03:24:03 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.486 [2024-11-20 03:24:03.836772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.486 [2024-11-20 03:24:03.838705] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.486 [2024-11-20 03:24:03.838747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.486 "name": "Existed_Raid", 00:17:14.486 "uuid": "fbd635d7-e4e1-442e-82d4-bc03a9174bb8", 00:17:14.486 "strip_size_kb": 0, 00:17:14.486 "state": "configuring", 00:17:14.486 "raid_level": "raid1", 00:17:14.486 "superblock": true, 00:17:14.486 "num_base_bdevs": 2, 00:17:14.486 "num_base_bdevs_discovered": 1, 00:17:14.486 "num_base_bdevs_operational": 2, 00:17:14.486 "base_bdevs_list": [ 00:17:14.486 { 00:17:14.486 "name": "BaseBdev1", 00:17:14.486 "uuid": "6e81a3dd-27e7-45b1-8b1d-2586d1b8aac5", 00:17:14.486 "is_configured": true, 00:17:14.486 "data_offset": 256, 00:17:14.486 "data_size": 7936 00:17:14.486 }, 00:17:14.486 { 00:17:14.486 "name": "BaseBdev2", 00:17:14.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.486 "is_configured": false, 00:17:14.486 "data_offset": 0, 00:17:14.486 "data_size": 0 00:17:14.486 } 00:17:14.486 ] 00:17:14.486 }' 00:17:14.486 03:24:03 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.486 03:24:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.746 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:14.746 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.746 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.746 [2024-11-20 03:24:04.327819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.746 [2024-11-20 03:24:04.328069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:14.746 [2024-11-20 03:24:04.328085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:14.746 [2024-11-20 03:24:04.328185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:14.746 [2024-11-20 03:24:04.328325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:14.747 [2024-11-20 03:24:04.328337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:14.747 [2024-11-20 03:24:04.328424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.747 BaseBdev2 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.747 [ 00:17:14.747 { 00:17:14.747 "name": "BaseBdev2", 00:17:14.747 "aliases": [ 00:17:14.747 "49d2de23-15e0-4838-89db-a64b83981c3f" 00:17:14.747 ], 00:17:14.747 "product_name": "Malloc disk", 00:17:14.747 "block_size": 4096, 00:17:14.747 "num_blocks": 8192, 00:17:14.747 "uuid": "49d2de23-15e0-4838-89db-a64b83981c3f", 00:17:14.747 "md_size": 32, 00:17:14.747 "md_interleave": false, 00:17:14.747 "dif_type": 0, 00:17:14.747 "assigned_rate_limits": { 00:17:14.747 "rw_ios_per_sec": 0, 00:17:14.747 "rw_mbytes_per_sec": 0, 00:17:14.747 "r_mbytes_per_sec": 0, 00:17:14.747 "w_mbytes_per_sec": 0 00:17:14.747 }, 00:17:14.747 "claimed": true, 00:17:14.747 "claim_type": 
"exclusive_write", 00:17:14.747 "zoned": false, 00:17:14.747 "supported_io_types": { 00:17:14.747 "read": true, 00:17:14.747 "write": true, 00:17:14.747 "unmap": true, 00:17:14.747 "flush": true, 00:17:14.747 "reset": true, 00:17:14.747 "nvme_admin": false, 00:17:14.747 "nvme_io": false, 00:17:14.747 "nvme_io_md": false, 00:17:14.747 "write_zeroes": true, 00:17:14.747 "zcopy": true, 00:17:14.747 "get_zone_info": false, 00:17:14.747 "zone_management": false, 00:17:14.747 "zone_append": false, 00:17:14.747 "compare": false, 00:17:14.747 "compare_and_write": false, 00:17:14.747 "abort": true, 00:17:14.747 "seek_hole": false, 00:17:14.747 "seek_data": false, 00:17:14.747 "copy": true, 00:17:14.747 "nvme_iov_md": false 00:17:14.747 }, 00:17:14.747 "memory_domains": [ 00:17:14.747 { 00:17:14.747 "dma_device_id": "system", 00:17:14.747 "dma_device_type": 1 00:17:14.747 }, 00:17:14.747 { 00:17:14.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.747 "dma_device_type": 2 00:17:14.747 } 00:17:14.747 ], 00:17:14.747 "driver_specific": {} 00:17:14.747 } 00:17:14.747 ] 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.747 
03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.747 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.007 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.007 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.007 "name": "Existed_Raid", 00:17:15.007 "uuid": "fbd635d7-e4e1-442e-82d4-bc03a9174bb8", 00:17:15.007 "strip_size_kb": 0, 00:17:15.007 "state": "online", 00:17:15.007 "raid_level": "raid1", 00:17:15.007 "superblock": true, 00:17:15.007 "num_base_bdevs": 2, 00:17:15.007 "num_base_bdevs_discovered": 2, 00:17:15.007 "num_base_bdevs_operational": 2, 00:17:15.007 
"base_bdevs_list": [ 00:17:15.007 { 00:17:15.007 "name": "BaseBdev1", 00:17:15.007 "uuid": "6e81a3dd-27e7-45b1-8b1d-2586d1b8aac5", 00:17:15.007 "is_configured": true, 00:17:15.007 "data_offset": 256, 00:17:15.007 "data_size": 7936 00:17:15.007 }, 00:17:15.007 { 00:17:15.007 "name": "BaseBdev2", 00:17:15.007 "uuid": "49d2de23-15e0-4838-89db-a64b83981c3f", 00:17:15.007 "is_configured": true, 00:17:15.007 "data_offset": 256, 00:17:15.007 "data_size": 7936 00:17:15.007 } 00:17:15.007 ] 00:17:15.007 }' 00:17:15.007 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.007 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:15.267 [2024-11-20 03:24:04.855262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.267 "name": "Existed_Raid", 00:17:15.267 "aliases": [ 00:17:15.267 "fbd635d7-e4e1-442e-82d4-bc03a9174bb8" 00:17:15.267 ], 00:17:15.267 "product_name": "Raid Volume", 00:17:15.267 "block_size": 4096, 00:17:15.267 "num_blocks": 7936, 00:17:15.267 "uuid": "fbd635d7-e4e1-442e-82d4-bc03a9174bb8", 00:17:15.267 "md_size": 32, 00:17:15.267 "md_interleave": false, 00:17:15.267 "dif_type": 0, 00:17:15.267 "assigned_rate_limits": { 00:17:15.267 "rw_ios_per_sec": 0, 00:17:15.267 "rw_mbytes_per_sec": 0, 00:17:15.267 "r_mbytes_per_sec": 0, 00:17:15.267 "w_mbytes_per_sec": 0 00:17:15.267 }, 00:17:15.267 "claimed": false, 00:17:15.267 "zoned": false, 00:17:15.267 "supported_io_types": { 00:17:15.267 "read": true, 00:17:15.267 "write": true, 00:17:15.267 "unmap": false, 00:17:15.267 "flush": false, 00:17:15.267 "reset": true, 00:17:15.267 "nvme_admin": false, 00:17:15.267 "nvme_io": false, 00:17:15.267 "nvme_io_md": false, 00:17:15.267 "write_zeroes": true, 00:17:15.267 "zcopy": false, 00:17:15.267 "get_zone_info": false, 00:17:15.267 "zone_management": false, 00:17:15.267 "zone_append": false, 00:17:15.267 "compare": false, 00:17:15.267 "compare_and_write": false, 00:17:15.267 "abort": false, 00:17:15.267 "seek_hole": false, 00:17:15.267 "seek_data": false, 00:17:15.267 "copy": false, 00:17:15.267 "nvme_iov_md": false 00:17:15.267 }, 00:17:15.267 "memory_domains": [ 00:17:15.267 { 00:17:15.267 "dma_device_id": "system", 00:17:15.267 "dma_device_type": 1 00:17:15.267 }, 00:17:15.267 { 00:17:15.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.267 "dma_device_type": 2 00:17:15.267 }, 00:17:15.267 { 
00:17:15.267 "dma_device_id": "system", 00:17:15.267 "dma_device_type": 1 00:17:15.267 }, 00:17:15.267 { 00:17:15.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.267 "dma_device_type": 2 00:17:15.267 } 00:17:15.267 ], 00:17:15.267 "driver_specific": { 00:17:15.267 "raid": { 00:17:15.267 "uuid": "fbd635d7-e4e1-442e-82d4-bc03a9174bb8", 00:17:15.267 "strip_size_kb": 0, 00:17:15.267 "state": "online", 00:17:15.267 "raid_level": "raid1", 00:17:15.267 "superblock": true, 00:17:15.267 "num_base_bdevs": 2, 00:17:15.267 "num_base_bdevs_discovered": 2, 00:17:15.267 "num_base_bdevs_operational": 2, 00:17:15.267 "base_bdevs_list": [ 00:17:15.267 { 00:17:15.267 "name": "BaseBdev1", 00:17:15.267 "uuid": "6e81a3dd-27e7-45b1-8b1d-2586d1b8aac5", 00:17:15.267 "is_configured": true, 00:17:15.267 "data_offset": 256, 00:17:15.267 "data_size": 7936 00:17:15.267 }, 00:17:15.267 { 00:17:15.267 "name": "BaseBdev2", 00:17:15.267 "uuid": "49d2de23-15e0-4838-89db-a64b83981c3f", 00:17:15.267 "is_configured": true, 00:17:15.267 "data_offset": 256, 00:17:15.267 "data_size": 7936 00:17:15.267 } 00:17:15.267 ] 00:17:15.267 } 00:17:15.267 } 00:17:15.267 }' 00:17:15.267 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.527 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:15.527 BaseBdev2' 00:17:15.527 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.527 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:15.527 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.527 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:15.527 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.527 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.527 03:24:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.527 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.527 [2024-11-20 03:24:05.074742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.787 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.787 "name": "Existed_Raid", 00:17:15.787 "uuid": "fbd635d7-e4e1-442e-82d4-bc03a9174bb8", 00:17:15.787 "strip_size_kb": 0, 00:17:15.787 "state": "online", 00:17:15.787 "raid_level": "raid1", 00:17:15.787 "superblock": true, 00:17:15.787 "num_base_bdevs": 2, 00:17:15.787 "num_base_bdevs_discovered": 1, 00:17:15.788 "num_base_bdevs_operational": 1, 00:17:15.788 "base_bdevs_list": [ 00:17:15.788 { 00:17:15.788 "name": null, 00:17:15.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.788 "is_configured": false, 00:17:15.788 "data_offset": 0, 00:17:15.788 "data_size": 7936 00:17:15.788 }, 00:17:15.788 { 00:17:15.788 "name": "BaseBdev2", 00:17:15.788 "uuid": 
"49d2de23-15e0-4838-89db-a64b83981c3f", 00:17:15.788 "is_configured": true, 00:17:15.788 "data_offset": 256, 00:17:15.788 "data_size": 7936 00:17:15.788 } 00:17:15.788 ] 00:17:15.788 }' 00:17:15.788 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.788 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.048 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.048 [2024-11-20 03:24:05.654785] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:16.048 [2024-11-20 03:24:05.654914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.308 [2024-11-20 03:24:05.763105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.308 [2024-11-20 03:24:05.763166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.308 [2024-11-20 03:24:05.763182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:16.308 03:24:05 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87004 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87004 ']' 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87004 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87004 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.308 killing process with pid 87004 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87004' 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87004 00:17:16.308 [2024-11-20 03:24:05.862815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.308 03:24:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87004 00:17:16.308 [2024-11-20 03:24:05.880170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.692 03:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:17.692 00:17:17.692 real 0m5.134s 00:17:17.692 user 0m7.258s 00:17:17.692 sys 0m0.959s 00:17:17.692 03:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.692 
03:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.692 ************************************ 00:17:17.692 END TEST raid_state_function_test_sb_md_separate 00:17:17.692 ************************************ 00:17:17.692 03:24:07 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:17.692 03:24:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:17.692 03:24:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.692 03:24:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.692 ************************************ 00:17:17.692 START TEST raid_superblock_test_md_separate 00:17:17.692 ************************************ 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87256 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87256 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87256 ']' 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.692 03:24:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.692 [2024-11-20 03:24:07.204682] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:17.692 [2024-11-20 03:24:07.204783] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87256 ] 00:17:17.952 [2024-11-20 03:24:07.375548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.952 [2024-11-20 03:24:07.508187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.211 [2024-11-20 03:24:07.732956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.211 [2024-11-20 03:24:07.733022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:18.471 03:24:08 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.471 malloc1 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.471 [2024-11-20 03:24:08.084826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.471 [2024-11-20 03:24:08.084895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.471 [2024-11-20 03:24:08.084923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:18.471 [2024-11-20 03:24:08.084934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.471 [2024-11-20 03:24:08.087058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.471 [2024-11-20 03:24:08.087099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:18.471 pt1 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.471 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.731 malloc2 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.731 03:24:08 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.731 [2024-11-20 03:24:08.147845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.731 [2024-11-20 03:24:08.147902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.731 [2024-11-20 03:24:08.147927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:18.731 [2024-11-20 03:24:08.147938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.731 [2024-11-20 03:24:08.150015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.731 [2024-11-20 03:24:08.150050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.731 pt2 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.731 [2024-11-20 03:24:08.159858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:18.731 [2024-11-20 03:24:08.161856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.731 [2024-11-20 03:24:08.162059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:18.731 [2024-11-20 03:24:08.162083] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:18.731 [2024-11-20 03:24:08.162168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:18.731 [2024-11-20 03:24:08.162298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:18.731 [2024-11-20 03:24:08.162319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:18.731 [2024-11-20 03:24:08.162445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.731 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.732 03:24:08 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.732 "name": "raid_bdev1", 00:17:18.732 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:18.732 "strip_size_kb": 0, 00:17:18.732 "state": "online", 00:17:18.732 "raid_level": "raid1", 00:17:18.732 "superblock": true, 00:17:18.732 "num_base_bdevs": 2, 00:17:18.732 "num_base_bdevs_discovered": 2, 00:17:18.732 "num_base_bdevs_operational": 2, 00:17:18.732 "base_bdevs_list": [ 00:17:18.732 { 00:17:18.732 "name": "pt1", 00:17:18.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:18.732 "is_configured": true, 00:17:18.732 "data_offset": 256, 00:17:18.732 "data_size": 7936 00:17:18.732 }, 00:17:18.732 { 00:17:18.732 "name": "pt2", 00:17:18.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.732 "is_configured": true, 00:17:18.732 "data_offset": 256, 00:17:18.732 "data_size": 7936 00:17:18.732 } 00:17:18.732 ] 00:17:18.732 }' 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.732 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.991 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:18.991 03:24:08 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:18.991 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:18.991 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:18.991 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:18.992 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:18.992 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.992 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.992 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.992 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:18.992 [2024-11-20 03:24:08.623724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:19.252 "name": "raid_bdev1", 00:17:19.252 "aliases": [ 00:17:19.252 "0f328062-e932-497b-9595-76adaa06963c" 00:17:19.252 ], 00:17:19.252 "product_name": "Raid Volume", 00:17:19.252 "block_size": 4096, 00:17:19.252 "num_blocks": 7936, 00:17:19.252 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:19.252 "md_size": 32, 00:17:19.252 "md_interleave": false, 00:17:19.252 "dif_type": 0, 00:17:19.252 "assigned_rate_limits": { 00:17:19.252 "rw_ios_per_sec": 0, 00:17:19.252 "rw_mbytes_per_sec": 0, 00:17:19.252 "r_mbytes_per_sec": 0, 00:17:19.252 "w_mbytes_per_sec": 0 00:17:19.252 }, 00:17:19.252 "claimed": false, 00:17:19.252 "zoned": false, 
00:17:19.252 "supported_io_types": { 00:17:19.252 "read": true, 00:17:19.252 "write": true, 00:17:19.252 "unmap": false, 00:17:19.252 "flush": false, 00:17:19.252 "reset": true, 00:17:19.252 "nvme_admin": false, 00:17:19.252 "nvme_io": false, 00:17:19.252 "nvme_io_md": false, 00:17:19.252 "write_zeroes": true, 00:17:19.252 "zcopy": false, 00:17:19.252 "get_zone_info": false, 00:17:19.252 "zone_management": false, 00:17:19.252 "zone_append": false, 00:17:19.252 "compare": false, 00:17:19.252 "compare_and_write": false, 00:17:19.252 "abort": false, 00:17:19.252 "seek_hole": false, 00:17:19.252 "seek_data": false, 00:17:19.252 "copy": false, 00:17:19.252 "nvme_iov_md": false 00:17:19.252 }, 00:17:19.252 "memory_domains": [ 00:17:19.252 { 00:17:19.252 "dma_device_id": "system", 00:17:19.252 "dma_device_type": 1 00:17:19.252 }, 00:17:19.252 { 00:17:19.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.252 "dma_device_type": 2 00:17:19.252 }, 00:17:19.252 { 00:17:19.252 "dma_device_id": "system", 00:17:19.252 "dma_device_type": 1 00:17:19.252 }, 00:17:19.252 { 00:17:19.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.252 "dma_device_type": 2 00:17:19.252 } 00:17:19.252 ], 00:17:19.252 "driver_specific": { 00:17:19.252 "raid": { 00:17:19.252 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:19.252 "strip_size_kb": 0, 00:17:19.252 "state": "online", 00:17:19.252 "raid_level": "raid1", 00:17:19.252 "superblock": true, 00:17:19.252 "num_base_bdevs": 2, 00:17:19.252 "num_base_bdevs_discovered": 2, 00:17:19.252 "num_base_bdevs_operational": 2, 00:17:19.252 "base_bdevs_list": [ 00:17:19.252 { 00:17:19.252 "name": "pt1", 00:17:19.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.252 "is_configured": true, 00:17:19.252 "data_offset": 256, 00:17:19.252 "data_size": 7936 00:17:19.252 }, 00:17:19.252 { 00:17:19.252 "name": "pt2", 00:17:19.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.252 "is_configured": true, 00:17:19.252 "data_offset": 256, 
00:17:19.252 "data_size": 7936 00:17:19.252 } 00:17:19.252 ] 00:17:19.252 } 00:17:19.252 } 00:17:19.252 }' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:19.252 pt2' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:19.252 [2024-11-20 03:24:08.827309] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0f328062-e932-497b-9595-76adaa06963c 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 0f328062-e932-497b-9595-76adaa06963c ']' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.252 03:24:08 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.252 [2024-11-20 03:24:08.870993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.252 [2024-11-20 03:24:08.871020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.252 [2024-11-20 03:24:08.871103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.252 [2024-11-20 03:24:08.871153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.252 [2024-11-20 03:24:08.871166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.252 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.513 03:24:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.513 [2024-11-20 03:24:09.006770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:19.513 [2024-11-20 03:24:09.008786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:19.513 [2024-11-20 03:24:09.008862] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:19.513 [2024-11-20 03:24:09.008905] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:19.513 [2024-11-20 03:24:09.008920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.513 [2024-11-20 03:24:09.008930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:17:19.513 request: 00:17:19.513 { 00:17:19.513 "name": "raid_bdev1", 00:17:19.513 "raid_level": "raid1", 00:17:19.513 "base_bdevs": [ 00:17:19.513 "malloc1", 00:17:19.513 "malloc2" 00:17:19.513 ], 00:17:19.513 "superblock": false, 00:17:19.513 "method": "bdev_raid_create", 00:17:19.513 "req_id": 1 00:17:19.513 } 00:17:19.513 Got JSON-RPC error response 00:17:19.513 response: 00:17:19.513 { 00:17:19.513 "code": -17, 00:17:19.513 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:19.513 } 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.513 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.513 [2024-11-20 03:24:09.070723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.513 [2024-11-20 03:24:09.070768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.513 [2024-11-20 03:24:09.070783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:19.513 [2024-11-20 03:24:09.070795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.513 [2024-11-20 03:24:09.072849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.513 [2024-11-20 03:24:09.072887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.513 [2024-11-20 03:24:09.072928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:19.513 [2024-11-20 03:24:09.072980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.513 pt1 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.514 "name": "raid_bdev1", 00:17:19.514 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:19.514 "strip_size_kb": 0, 00:17:19.514 "state": "configuring", 00:17:19.514 "raid_level": "raid1", 00:17:19.514 "superblock": true, 00:17:19.514 "num_base_bdevs": 2, 00:17:19.514 "num_base_bdevs_discovered": 1, 00:17:19.514 "num_base_bdevs_operational": 2, 00:17:19.514 "base_bdevs_list": [ 00:17:19.514 { 00:17:19.514 "name": "pt1", 00:17:19.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.514 "is_configured": true, 00:17:19.514 "data_offset": 256, 00:17:19.514 "data_size": 7936 00:17:19.514 }, 00:17:19.514 { 
00:17:19.514 "name": null, 00:17:19.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.514 "is_configured": false, 00:17:19.514 "data_offset": 256, 00:17:19.514 "data_size": 7936 00:17:19.514 } 00:17:19.514 ] 00:17:19.514 }' 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.514 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.083 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:20.083 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:20.083 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:20.083 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.083 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.083 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.083 [2024-11-20 03:24:09.494702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.083 [2024-11-20 03:24:09.494758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.083 [2024-11-20 03:24:09.494775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:20.083 [2024-11-20 03:24:09.494786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.083 [2024-11-20 03:24:09.494934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.083 [2024-11-20 03:24:09.494950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.083 [2024-11-20 03:24:09.494985] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:20.084 [2024-11-20 03:24:09.495005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.084 [2024-11-20 03:24:09.495092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:20.084 [2024-11-20 03:24:09.495103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.084 [2024-11-20 03:24:09.495165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:20.084 [2024-11-20 03:24:09.495275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:20.084 [2024-11-20 03:24:09.495283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:20.084 [2024-11-20 03:24:09.495370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.084 pt2 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.084 03:24:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.084 "name": "raid_bdev1", 00:17:20.084 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:20.084 "strip_size_kb": 0, 00:17:20.084 "state": "online", 00:17:20.084 "raid_level": "raid1", 00:17:20.084 "superblock": true, 00:17:20.084 "num_base_bdevs": 2, 00:17:20.084 "num_base_bdevs_discovered": 2, 00:17:20.084 "num_base_bdevs_operational": 2, 00:17:20.084 "base_bdevs_list": [ 00:17:20.084 { 00:17:20.084 "name": "pt1", 00:17:20.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.084 "is_configured": true, 00:17:20.084 "data_offset": 256, 00:17:20.084 "data_size": 7936 00:17:20.084 }, 00:17:20.084 { 00:17:20.084 "name": "pt2", 00:17:20.084 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:17:20.084 "is_configured": true, 00:17:20.084 "data_offset": 256, 00:17:20.084 "data_size": 7936 00:17:20.084 } 00:17:20.084 ] 00:17:20.084 }' 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.084 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:20.344 [2024-11-20 03:24:09.898929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.344 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:20.344 "name": "raid_bdev1", 00:17:20.344 
"aliases": [ 00:17:20.344 "0f328062-e932-497b-9595-76adaa06963c" 00:17:20.344 ], 00:17:20.344 "product_name": "Raid Volume", 00:17:20.344 "block_size": 4096, 00:17:20.344 "num_blocks": 7936, 00:17:20.344 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:20.344 "md_size": 32, 00:17:20.344 "md_interleave": false, 00:17:20.344 "dif_type": 0, 00:17:20.344 "assigned_rate_limits": { 00:17:20.344 "rw_ios_per_sec": 0, 00:17:20.344 "rw_mbytes_per_sec": 0, 00:17:20.344 "r_mbytes_per_sec": 0, 00:17:20.344 "w_mbytes_per_sec": 0 00:17:20.344 }, 00:17:20.344 "claimed": false, 00:17:20.344 "zoned": false, 00:17:20.344 "supported_io_types": { 00:17:20.344 "read": true, 00:17:20.344 "write": true, 00:17:20.344 "unmap": false, 00:17:20.344 "flush": false, 00:17:20.344 "reset": true, 00:17:20.344 "nvme_admin": false, 00:17:20.345 "nvme_io": false, 00:17:20.345 "nvme_io_md": false, 00:17:20.345 "write_zeroes": true, 00:17:20.345 "zcopy": false, 00:17:20.345 "get_zone_info": false, 00:17:20.345 "zone_management": false, 00:17:20.345 "zone_append": false, 00:17:20.345 "compare": false, 00:17:20.345 "compare_and_write": false, 00:17:20.345 "abort": false, 00:17:20.345 "seek_hole": false, 00:17:20.345 "seek_data": false, 00:17:20.345 "copy": false, 00:17:20.345 "nvme_iov_md": false 00:17:20.345 }, 00:17:20.345 "memory_domains": [ 00:17:20.345 { 00:17:20.345 "dma_device_id": "system", 00:17:20.345 "dma_device_type": 1 00:17:20.345 }, 00:17:20.345 { 00:17:20.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.345 "dma_device_type": 2 00:17:20.345 }, 00:17:20.345 { 00:17:20.345 "dma_device_id": "system", 00:17:20.345 "dma_device_type": 1 00:17:20.345 }, 00:17:20.345 { 00:17:20.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.345 "dma_device_type": 2 00:17:20.345 } 00:17:20.345 ], 00:17:20.345 "driver_specific": { 00:17:20.345 "raid": { 00:17:20.345 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:20.345 "strip_size_kb": 0, 00:17:20.345 "state": "online", 00:17:20.345 
"raid_level": "raid1", 00:17:20.345 "superblock": true, 00:17:20.345 "num_base_bdevs": 2, 00:17:20.345 "num_base_bdevs_discovered": 2, 00:17:20.345 "num_base_bdevs_operational": 2, 00:17:20.345 "base_bdevs_list": [ 00:17:20.345 { 00:17:20.345 "name": "pt1", 00:17:20.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.345 "is_configured": true, 00:17:20.345 "data_offset": 256, 00:17:20.345 "data_size": 7936 00:17:20.345 }, 00:17:20.345 { 00:17:20.345 "name": "pt2", 00:17:20.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.345 "is_configured": true, 00:17:20.345 "data_offset": 256, 00:17:20.345 "data_size": 7936 00:17:20.345 } 00:17:20.345 ] 00:17:20.345 } 00:17:20.345 } 00:17:20.345 }' 00:17:20.345 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:20.605 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:20.605 pt2' 00:17:20.605 03:24:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.605 03:24:10 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.605 [2024-11-20 03:24:10.130943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 0f328062-e932-497b-9595-76adaa06963c '!=' 0f328062-e932-497b-9595-76adaa06963c ']' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.605 [2024-11-20 03:24:10.174749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.605 
03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.605 "name": "raid_bdev1", 00:17:20.605 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:20.605 "strip_size_kb": 0, 00:17:20.605 "state": "online", 00:17:20.605 "raid_level": "raid1", 00:17:20.605 "superblock": true, 00:17:20.605 "num_base_bdevs": 2, 00:17:20.605 "num_base_bdevs_discovered": 1, 00:17:20.605 "num_base_bdevs_operational": 1, 00:17:20.605 "base_bdevs_list": [ 00:17:20.605 { 00:17:20.605 "name": null, 00:17:20.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.605 "is_configured": false, 00:17:20.605 "data_offset": 0, 00:17:20.605 "data_size": 7936 00:17:20.605 }, 00:17:20.605 { 00:17:20.605 "name": "pt2", 00:17:20.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.605 "is_configured": true, 00:17:20.605 "data_offset": 256, 00:17:20.605 "data_size": 7936 00:17:20.605 } 
00:17:20.605 ] 00:17:20.605 }' 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.605 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.176 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.176 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.176 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.177 [2024-11-20 03:24:10.630711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.177 [2024-11-20 03:24:10.630736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.177 [2024-11-20 03:24:10.630783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.177 [2024-11-20 03:24:10.630816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.177 [2024-11-20 03:24:10.630828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.177 03:24:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.177 [2024-11-20 03:24:10.698733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.177 [2024-11-20 
03:24:10.698784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.177 [2024-11-20 03:24:10.698800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:21.177 [2024-11-20 03:24:10.698812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.177 [2024-11-20 03:24:10.700861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.177 [2024-11-20 03:24:10.700899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.177 [2024-11-20 03:24:10.700938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:21.177 [2024-11-20 03:24:10.700980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.177 [2024-11-20 03:24:10.701069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:21.177 [2024-11-20 03:24:10.701089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:21.177 [2024-11-20 03:24:10.701150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:21.177 [2024-11-20 03:24:10.701252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:21.177 [2024-11-20 03:24:10.701260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:21.177 [2024-11-20 03:24:10.701357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.177 pt2 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.177 "name": "raid_bdev1", 00:17:21.177 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:21.177 "strip_size_kb": 0, 00:17:21.177 "state": "online", 00:17:21.177 "raid_level": "raid1", 00:17:21.177 "superblock": true, 00:17:21.177 "num_base_bdevs": 2, 00:17:21.177 
"num_base_bdevs_discovered": 1, 00:17:21.177 "num_base_bdevs_operational": 1, 00:17:21.177 "base_bdevs_list": [ 00:17:21.177 { 00:17:21.177 "name": null, 00:17:21.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.177 "is_configured": false, 00:17:21.177 "data_offset": 256, 00:17:21.177 "data_size": 7936 00:17:21.177 }, 00:17:21.177 { 00:17:21.177 "name": "pt2", 00:17:21.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.177 "is_configured": true, 00:17:21.177 "data_offset": 256, 00:17:21.177 "data_size": 7936 00:17:21.177 } 00:17:21.177 ] 00:17:21.177 }' 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.177 03:24:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.747 [2024-11-20 03:24:11.114691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.747 [2024-11-20 03:24:11.114717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.747 [2024-11-20 03:24:11.114759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.747 [2024-11-20 03:24:11.114794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.747 [2024-11-20 03:24:11.114803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.747 03:24:11 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.747 [2024-11-20 03:24:11.174742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:21.747 [2024-11-20 03:24:11.174834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.747 [2024-11-20 03:24:11.174869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:21.747 [2024-11-20 03:24:11.174900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.747 [2024-11-20 03:24:11.177000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.747 [2024-11-20 03:24:11.177080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:17:21.747 [2024-11-20 03:24:11.177146] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:21.747 [2024-11-20 03:24:11.177199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:21.747 [2024-11-20 03:24:11.177340] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:21.747 [2024-11-20 03:24:11.177397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.747 [2024-11-20 03:24:11.177443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:21.747 [2024-11-20 03:24:11.177553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.747 [2024-11-20 03:24:11.177670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:21.747 [2024-11-20 03:24:11.177713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:21.747 [2024-11-20 03:24:11.177797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:21.747 [2024-11-20 03:24:11.177923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:21.747 [2024-11-20 03:24:11.177966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:21.747 [2024-11-20 03:24:11.178095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.747 pt1 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.747 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.748 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.748 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.748 "name": "raid_bdev1", 00:17:21.748 "uuid": "0f328062-e932-497b-9595-76adaa06963c", 00:17:21.748 "strip_size_kb": 0, 00:17:21.748 "state": "online", 00:17:21.748 "raid_level": "raid1", 
00:17:21.748 "superblock": true, 00:17:21.748 "num_base_bdevs": 2, 00:17:21.748 "num_base_bdevs_discovered": 1, 00:17:21.748 "num_base_bdevs_operational": 1, 00:17:21.748 "base_bdevs_list": [ 00:17:21.748 { 00:17:21.748 "name": null, 00:17:21.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.748 "is_configured": false, 00:17:21.748 "data_offset": 256, 00:17:21.748 "data_size": 7936 00:17:21.748 }, 00:17:21.748 { 00:17:21.748 "name": "pt2", 00:17:21.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.748 "is_configured": true, 00:17:21.748 "data_offset": 256, 00:17:21.748 "data_size": 7936 00:17:21.748 } 00:17:21.748 ] 00:17:21.748 }' 00:17:21.748 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.748 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.008 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:22.008 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.008 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.008 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:22.008 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.268 
03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.268 [2024-11-20 03:24:11.678896] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 0f328062-e932-497b-9595-76adaa06963c '!=' 0f328062-e932-497b-9595-76adaa06963c ']' 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87256 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87256 ']' 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87256 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87256 00:17:22.268 killing process with pid 87256 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87256' 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87256 00:17:22.268 [2024-11-20 03:24:11.744841] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.268 [2024-11-20 03:24:11.744903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:17:22.268 [2024-11-20 03:24:11.744935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.268 [2024-11-20 03:24:11.744951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:22.268 03:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87256 00:17:22.528 [2024-11-20 03:24:11.968533] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.911 03:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:23.911 00:17:23.911 real 0m5.990s 00:17:23.911 user 0m8.954s 00:17:23.911 sys 0m1.089s 00:17:23.911 03:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.911 ************************************ 00:17:23.911 END TEST raid_superblock_test_md_separate 00:17:23.911 ************************************ 00:17:23.911 03:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.911 03:24:13 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:23.911 03:24:13 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:23.911 03:24:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:23.911 03:24:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.911 03:24:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.911 ************************************ 00:17:23.911 START TEST raid_rebuild_test_sb_md_separate 00:17:23.911 ************************************ 00:17:23.911 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:23.911 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:17:23.911 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:23.911 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:23.911 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:23.911 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87579 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87579 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87579 ']' 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.912 03:24:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.912 [2024-11-20 03:24:13.302386] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:23.912 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:23.912 Zero copy mechanism will not be used. 00:17:23.912 [2024-11-20 03:24:13.302642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87579 ] 00:17:23.912 [2024-11-20 03:24:13.485195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.172 [2024-11-20 03:24:13.618737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.433 [2024-11-20 03:24:13.850927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.433 [2024-11-20 03:24:13.850970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.693 BaseBdev1_malloc 
00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.693 [2024-11-20 03:24:14.140225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:24.693 [2024-11-20 03:24:14.140307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.693 [2024-11-20 03:24:14.140333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:24.693 [2024-11-20 03:24:14.140348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.693 [2024-11-20 03:24:14.142518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.693 [2024-11-20 03:24:14.142661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:24.693 BaseBdev1 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.694 BaseBdev2_malloc 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.694 [2024-11-20 03:24:14.202420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:24.694 [2024-11-20 03:24:14.202567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.694 [2024-11-20 03:24:14.202594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:24.694 [2024-11-20 03:24:14.202608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.694 [2024-11-20 03:24:14.204679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.694 [2024-11-20 03:24:14.204717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:24.694 BaseBdev2 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.694 spare_malloc 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.694 spare_delay 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.694 [2024-11-20 03:24:14.284445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:24.694 [2024-11-20 03:24:14.284510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.694 [2024-11-20 03:24:14.284533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:24.694 [2024-11-20 03:24:14.284546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.694 [2024-11-20 03:24:14.286684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.694 [2024-11-20 03:24:14.286724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:24.694 spare 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.694 [2024-11-20 03:24:14.296475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.694 [2024-11-20 03:24:14.298485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.694 [2024-11-20 03:24:14.298777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:24.694 [2024-11-20 03:24:14.298799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.694 [2024-11-20 03:24:14.298874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:24.694 [2024-11-20 03:24:14.299011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:24.694 [2024-11-20 03:24:14.299020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:24.694 [2024-11-20 03:24:14.299125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.694 03:24:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.694 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.954 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.954 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.954 "name": "raid_bdev1", 00:17:24.954 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:24.954 "strip_size_kb": 0, 00:17:24.954 "state": "online", 00:17:24.954 "raid_level": "raid1", 00:17:24.954 "superblock": true, 00:17:24.954 "num_base_bdevs": 2, 00:17:24.954 "num_base_bdevs_discovered": 2, 00:17:24.954 "num_base_bdevs_operational": 2, 00:17:24.954 "base_bdevs_list": [ 00:17:24.954 { 00:17:24.954 "name": "BaseBdev1", 00:17:24.954 "uuid": "a3db667a-6396-5740-a49e-331858c376ca", 00:17:24.954 "is_configured": true, 00:17:24.954 "data_offset": 256, 00:17:24.954 "data_size": 7936 00:17:24.954 }, 00:17:24.954 { 00:17:24.954 "name": "BaseBdev2", 00:17:24.954 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:24.954 "is_configured": true, 00:17:24.954 "data_offset": 256, 00:17:24.954 "data_size": 7936 
00:17:24.954 } 00:17:24.954 ] 00:17:24.954 }' 00:17:24.954 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.954 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:25.214 [2024-11-20 03:24:14.691986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:25.214 03:24:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:25.474 [2024-11-20 03:24:14.967791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:25.474 /dev/nbd0 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.474 1+0 records in 00:17:25.474 1+0 records out 00:17:25.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417021 s, 9.8 MB/s 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.474 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:25.475 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.475 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.475 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:25.475 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.475 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:25.475 03:24:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:25.475 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:25.475 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:26.412 7936+0 records in 00:17:26.412 7936+0 records out 00:17:26.412 32505856 bytes (33 MB, 31 MiB) copied, 0.66614 s, 48.8 MB/s 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:26.412 [2024-11-20 03:24:15.927865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.412 03:24:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.412 [2024-11-20 03:24:15.955911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.412 03:24:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.412 03:24:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.412 "name": "raid_bdev1", 00:17:26.412 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:26.412 "strip_size_kb": 0, 00:17:26.412 "state": "online", 00:17:26.412 "raid_level": "raid1", 00:17:26.412 "superblock": true, 00:17:26.412 "num_base_bdevs": 2, 00:17:26.412 "num_base_bdevs_discovered": 1, 00:17:26.412 "num_base_bdevs_operational": 1, 00:17:26.412 "base_bdevs_list": [ 00:17:26.412 { 00:17:26.412 "name": null, 00:17:26.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.412 "is_configured": false, 00:17:26.412 "data_offset": 0, 00:17:26.412 "data_size": 7936 00:17:26.412 }, 00:17:26.412 { 00:17:26.412 "name": "BaseBdev2", 00:17:26.412 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:26.412 "is_configured": true, 00:17:26.412 "data_offset": 256, 00:17:26.412 "data_size": 7936 00:17:26.412 } 00:17:26.412 ] 00:17:26.412 }' 00:17:26.412 03:24:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.412 03:24:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.985 03:24:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.985 03:24:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.985 03:24:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.985 [2024-11-20 03:24:16.391576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.985 [2024-11-20 03:24:16.406636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:26.985 03:24:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.985 03:24:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:26.985 [2024-11-20 03:24:16.408688] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.947 "name": "raid_bdev1", 00:17:27.947 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:27.947 "strip_size_kb": 0, 00:17:27.947 "state": "online", 00:17:27.947 "raid_level": "raid1", 00:17:27.947 "superblock": true, 00:17:27.947 "num_base_bdevs": 2, 00:17:27.947 "num_base_bdevs_discovered": 2, 00:17:27.947 "num_base_bdevs_operational": 2, 00:17:27.947 "process": { 00:17:27.947 "type": "rebuild", 00:17:27.947 "target": "spare", 00:17:27.947 "progress": { 00:17:27.947 "blocks": 2560, 00:17:27.947 "percent": 32 00:17:27.947 } 00:17:27.947 }, 00:17:27.947 "base_bdevs_list": [ 00:17:27.947 { 00:17:27.947 "name": "spare", 00:17:27.947 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:27.947 "is_configured": true, 00:17:27.947 "data_offset": 256, 00:17:27.947 "data_size": 7936 00:17:27.947 }, 00:17:27.947 { 00:17:27.947 "name": "BaseBdev2", 00:17:27.947 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:27.947 "is_configured": true, 00:17:27.947 "data_offset": 256, 00:17:27.947 "data_size": 7936 00:17:27.947 } 00:17:27.947 ] 00:17:27.947 }' 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.947 03:24:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.947 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.947 [2024-11-20 03:24:17.545485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.207 [2024-11-20 03:24:17.617375] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:28.207 [2024-11-20 03:24:17.617445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.207 [2024-11-20 03:24:17.617461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.207 [2024-11-20 03:24:17.617473] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.207 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.207 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.207 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.207 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.207 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.208 03:24:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.208 "name": "raid_bdev1", 00:17:28.208 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:28.208 "strip_size_kb": 0, 00:17:28.208 "state": "online", 00:17:28.208 "raid_level": "raid1", 00:17:28.208 "superblock": true, 00:17:28.208 "num_base_bdevs": 2, 00:17:28.208 "num_base_bdevs_discovered": 1, 00:17:28.208 "num_base_bdevs_operational": 1, 00:17:28.208 "base_bdevs_list": [ 00:17:28.208 { 00:17:28.208 "name": null, 00:17:28.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.208 "is_configured": false, 00:17:28.208 "data_offset": 0, 00:17:28.208 "data_size": 7936 00:17:28.208 }, 00:17:28.208 { 00:17:28.208 "name": "BaseBdev2", 00:17:28.208 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:28.208 "is_configured": true, 00:17:28.208 "data_offset": 256, 00:17:28.208 "data_size": 7936 00:17:28.208 } 00:17:28.208 ] 00:17:28.208 }' 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.208 03:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.468 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.729 "name": "raid_bdev1", 00:17:28.729 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:28.729 "strip_size_kb": 0, 00:17:28.729 "state": "online", 00:17:28.729 "raid_level": "raid1", 00:17:28.729 "superblock": true, 00:17:28.729 "num_base_bdevs": 2, 00:17:28.729 "num_base_bdevs_discovered": 1, 00:17:28.729 "num_base_bdevs_operational": 1, 00:17:28.729 "base_bdevs_list": [ 00:17:28.729 { 00:17:28.729 "name": null, 00:17:28.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.729 
"is_configured": false, 00:17:28.729 "data_offset": 0, 00:17:28.729 "data_size": 7936 00:17:28.729 }, 00:17:28.729 { 00:17:28.729 "name": "BaseBdev2", 00:17:28.729 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:28.729 "is_configured": true, 00:17:28.729 "data_offset": 256, 00:17:28.729 "data_size": 7936 00:17:28.729 } 00:17:28.729 ] 00:17:28.729 }' 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.729 [2024-11-20 03:24:18.220739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.729 [2024-11-20 03:24:18.232994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.729 03:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:28.729 [2024-11-20 03:24:18.235080] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.670 03:24:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.670 "name": "raid_bdev1", 00:17:29.670 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:29.670 "strip_size_kb": 0, 00:17:29.670 "state": "online", 00:17:29.670 "raid_level": "raid1", 00:17:29.670 "superblock": true, 00:17:29.670 "num_base_bdevs": 2, 00:17:29.670 "num_base_bdevs_discovered": 2, 00:17:29.670 "num_base_bdevs_operational": 2, 00:17:29.670 "process": { 00:17:29.670 "type": "rebuild", 00:17:29.670 "target": "spare", 00:17:29.670 "progress": { 00:17:29.670 "blocks": 2560, 00:17:29.670 "percent": 32 00:17:29.670 } 00:17:29.670 }, 00:17:29.670 "base_bdevs_list": [ 00:17:29.670 { 00:17:29.670 "name": "spare", 00:17:29.670 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:29.670 "is_configured": true, 00:17:29.670 "data_offset": 256, 00:17:29.670 "data_size": 7936 00:17:29.670 }, 
00:17:29.670 { 00:17:29.670 "name": "BaseBdev2", 00:17:29.670 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:29.670 "is_configured": true, 00:17:29.670 "data_offset": 256, 00:17:29.670 "data_size": 7936 00:17:29.670 } 00:17:29.670 ] 00:17:29.670 }' 00:17:29.670 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:29.930 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=703 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.930 03:24:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.930 "name": "raid_bdev1", 00:17:29.930 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:29.930 "strip_size_kb": 0, 00:17:29.930 "state": "online", 00:17:29.930 "raid_level": "raid1", 00:17:29.930 "superblock": true, 00:17:29.930 "num_base_bdevs": 2, 00:17:29.930 "num_base_bdevs_discovered": 2, 00:17:29.930 "num_base_bdevs_operational": 2, 00:17:29.930 "process": { 00:17:29.930 "type": "rebuild", 00:17:29.930 "target": "spare", 00:17:29.930 "progress": { 00:17:29.930 "blocks": 2816, 00:17:29.930 "percent": 35 00:17:29.930 } 00:17:29.930 }, 00:17:29.930 "base_bdevs_list": [ 00:17:29.930 { 00:17:29.930 "name": "spare", 00:17:29.930 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:29.930 "is_configured": true, 00:17:29.930 "data_offset": 256, 00:17:29.930 "data_size": 7936 00:17:29.930 }, 00:17:29.930 { 00:17:29.930 "name": "BaseBdev2", 00:17:29.930 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:29.930 
"is_configured": true, 00:17:29.930 "data_offset": 256, 00:17:29.930 "data_size": 7936 00:17:29.930 } 00:17:29.930 ] 00:17:29.930 }' 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.930 03:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.871 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.871 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.871 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.871 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.871 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.871 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.132 03:24:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.132 "name": "raid_bdev1", 00:17:31.132 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:31.132 "strip_size_kb": 0, 00:17:31.132 "state": "online", 00:17:31.132 "raid_level": "raid1", 00:17:31.132 "superblock": true, 00:17:31.132 "num_base_bdevs": 2, 00:17:31.132 "num_base_bdevs_discovered": 2, 00:17:31.132 "num_base_bdevs_operational": 2, 00:17:31.132 "process": { 00:17:31.132 "type": "rebuild", 00:17:31.132 "target": "spare", 00:17:31.132 "progress": { 00:17:31.132 "blocks": 5632, 00:17:31.132 "percent": 70 00:17:31.132 } 00:17:31.132 }, 00:17:31.132 "base_bdevs_list": [ 00:17:31.132 { 00:17:31.132 "name": "spare", 00:17:31.132 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:31.132 "is_configured": true, 00:17:31.132 "data_offset": 256, 00:17:31.132 "data_size": 7936 00:17:31.132 }, 00:17:31.132 { 00:17:31.132 "name": "BaseBdev2", 00:17:31.132 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:31.132 "is_configured": true, 00:17:31.132 "data_offset": 256, 00:17:31.132 "data_size": 7936 00:17:31.132 } 00:17:31.132 ] 00:17:31.132 }' 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.132 03:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.073 [2024-11-20 03:24:21.355101] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:17:32.073 [2024-11-20 03:24:21.355193] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:32.073 [2024-11-20 03:24:21.355883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.073 "name": "raid_bdev1", 00:17:32.073 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:32.073 "strip_size_kb": 0, 00:17:32.073 "state": "online", 00:17:32.073 "raid_level": "raid1", 00:17:32.073 "superblock": true, 00:17:32.073 
"num_base_bdevs": 2, 00:17:32.073 "num_base_bdevs_discovered": 2, 00:17:32.073 "num_base_bdevs_operational": 2, 00:17:32.073 "base_bdevs_list": [ 00:17:32.073 { 00:17:32.073 "name": "spare", 00:17:32.073 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:32.073 "is_configured": true, 00:17:32.073 "data_offset": 256, 00:17:32.073 "data_size": 7936 00:17:32.073 }, 00:17:32.073 { 00:17:32.073 "name": "BaseBdev2", 00:17:32.073 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:32.073 "is_configured": true, 00:17:32.073 "data_offset": 256, 00:17:32.073 "data_size": 7936 00:17:32.073 } 00:17:32.073 ] 00:17:32.073 }' 00:17:32.073 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.448 03:24:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.448 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.449 "name": "raid_bdev1", 00:17:32.449 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:32.449 "strip_size_kb": 0, 00:17:32.449 "state": "online", 00:17:32.449 "raid_level": "raid1", 00:17:32.449 "superblock": true, 00:17:32.449 "num_base_bdevs": 2, 00:17:32.449 "num_base_bdevs_discovered": 2, 00:17:32.449 "num_base_bdevs_operational": 2, 00:17:32.449 "base_bdevs_list": [ 00:17:32.449 { 00:17:32.449 "name": "spare", 00:17:32.449 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:32.449 "is_configured": true, 00:17:32.449 "data_offset": 256, 00:17:32.449 "data_size": 7936 00:17:32.449 }, 00:17:32.449 { 00:17:32.449 "name": "BaseBdev2", 00:17:32.449 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:32.449 "is_configured": true, 00:17:32.449 "data_offset": 256, 00:17:32.449 "data_size": 7936 00:17:32.449 } 00:17:32.449 ] 00:17:32.449 }' 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.449 "name": "raid_bdev1", 00:17:32.449 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:32.449 
"strip_size_kb": 0, 00:17:32.449 "state": "online", 00:17:32.449 "raid_level": "raid1", 00:17:32.449 "superblock": true, 00:17:32.449 "num_base_bdevs": 2, 00:17:32.449 "num_base_bdevs_discovered": 2, 00:17:32.449 "num_base_bdevs_operational": 2, 00:17:32.449 "base_bdevs_list": [ 00:17:32.449 { 00:17:32.449 "name": "spare", 00:17:32.449 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:32.449 "is_configured": true, 00:17:32.449 "data_offset": 256, 00:17:32.449 "data_size": 7936 00:17:32.449 }, 00:17:32.449 { 00:17:32.449 "name": "BaseBdev2", 00:17:32.449 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:32.449 "is_configured": true, 00:17:32.449 "data_offset": 256, 00:17:32.449 "data_size": 7936 00:17:32.449 } 00:17:32.449 ] 00:17:32.449 }' 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.449 03:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.709 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.709 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.709 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.709 [2024-11-20 03:24:22.325266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.709 [2024-11-20 03:24:22.325391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.709 [2024-11-20 03:24:22.325508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.709 [2024-11-20 03:24:22.325594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.709 [2024-11-20 03:24:22.325714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:17:32.709 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.709 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:32.709 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.709 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.709 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:32.969 /dev/nbd0 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.969 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.229 1+0 records in 00:17:33.229 1+0 records out 00:17:33.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333447 s, 12.3 MB/s 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:33.229 /dev/nbd1 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.229 1+0 records in 00:17:33.229 1+0 records out 00:17:33.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249847 s, 16.4 MB/s 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:33.229 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.230 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.230 03:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:33.490 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:33.490 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.490 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.490 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.490 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:33.490 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.490 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.750 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
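Aside: the `waitfornbd`/`waitfornbd_exit` helpers traced above poll `/proc/partitions` for the device name (followed, on start, by a direct-I/O `dd` read to confirm the kernel can serve I/O). A minimal standalone sketch of that polling pattern — the function name, retry count, and sample file below are illustrative, not SPDK's actual implementation:

```shell
#!/usr/bin/env bash
# Retry a whole-word grep against a partitions-style listing, up to 20
# attempts with a short sleep between them (mirrors the i<=20 loop above).
wait_for_word() {
    local word=$1 file=$2 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$word" "$file"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Example against a fake /proc/partitions snapshot.
tmp=$(mktemp)
printf 'major minor  #blocks  name\n\n  43    0   8192 nbd0\n' > "$tmp"
wait_for_word nbd0 "$tmp" && echo "nbd0 present"   # prints: nbd0 present
rm -f "$tmp"
```

In the real helper the poll is followed by `dd if=/dev/nbd0 ... iflag=direct`, as the `1+0 records in/out` lines above show, so readiness means "serves reads", not merely "node exists".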
00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.011 [2024-11-20 03:24:23.457350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.011 [2024-11-20 03:24:23.457406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.011 [2024-11-20 03:24:23.457428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:34.011 [2024-11-20 03:24:23.457438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:34.011 [2024-11-20 03:24:23.459375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.011 [2024-11-20 03:24:23.459457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.011 [2024-11-20 03:24:23.459527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:34.011 [2024-11-20 03:24:23.459596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.011 [2024-11-20 03:24:23.459756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.011 spare 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.011 [2024-11-20 03:24:23.559639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:34.011 [2024-11-20 03:24:23.559666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:34.011 [2024-11-20 03:24:23.559759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:34.011 [2024-11-20 03:24:23.559888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:34.011 [2024-11-20 03:24:23.559895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:34.011 [2024-11-20 03:24:23.560011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
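The `verify_raid_bdev_state` calls in this log all follow the same shape: fetch every RAID bdev over RPC, `select` the one under test by name, then compare individual fields. A minimal sketch of that jq pattern, with the JSON abbreviated from the log output (a real run would pipe `rpc.py bdev_raid_get_bdevs all` instead of a literal):

```shell
#!/usr/bin/env bash
# Abbreviated stand-in for the bdev_raid_get_bdevs RPC response above.
raid_bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
  "num_base_bdevs_discovered":2,"num_base_bdevs_operational":2}]'

# Pick out the bdev under test, then check fields one by one.
info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$raid_bdevs")
[ "$(jq -r '.state' <<< "$info")" = online ]
[ "$(jq -r '.raid_level' <<< "$info")" = raid1 ]
[ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 2 ]
echo "raid_bdev1: state and level verified"
```

Filtering once into `$info` and re-querying it keeps each assertion a one-liner, which is why the log's failure output can quote the full JSON blob verbatim.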
00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.011 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.012 "name": "raid_bdev1", 00:17:34.012 "uuid": 
"7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:34.012 "strip_size_kb": 0, 00:17:34.012 "state": "online", 00:17:34.012 "raid_level": "raid1", 00:17:34.012 "superblock": true, 00:17:34.012 "num_base_bdevs": 2, 00:17:34.012 "num_base_bdevs_discovered": 2, 00:17:34.012 "num_base_bdevs_operational": 2, 00:17:34.012 "base_bdevs_list": [ 00:17:34.012 { 00:17:34.012 "name": "spare", 00:17:34.012 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:34.012 "is_configured": true, 00:17:34.012 "data_offset": 256, 00:17:34.012 "data_size": 7936 00:17:34.012 }, 00:17:34.012 { 00:17:34.012 "name": "BaseBdev2", 00:17:34.012 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:34.012 "is_configured": true, 00:17:34.012 "data_offset": 256, 00:17:34.012 "data_size": 7936 00:17:34.012 } 00:17:34.012 ] 00:17:34.012 }' 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.012 03:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
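The `verify_raid_bdev_process` checks above rely on jq's alternative operator: `.process.type // "none"` yields the string `none` whenever no background process is attached to the bdev, so idle and rebuilding states can be compared with the same expression. A small standalone illustration (JSON abbreviated from the log):

```shell
#!/usr/bin/env bash
# Idle bdev: no "process" key at all; rebuilding bdev: process present.
idle='{"name":"raid_bdev1","state":"online"}'
rebuilding='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}'

# `//` substitutes "none" when the left side is null or missing.
[ "$(jq -r '.process.type // "none"' <<< "$idle")" = none ]
[ "$(jq -r '.process.type // "none"' <<< "$rebuilding")" = rebuild ]
[ "$(jq -r '.process.target // "none"' <<< "$rebuilding")" = spare ]
echo "process-type checks pass"
```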
00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.582 "name": "raid_bdev1", 00:17:34.582 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:34.582 "strip_size_kb": 0, 00:17:34.582 "state": "online", 00:17:34.582 "raid_level": "raid1", 00:17:34.582 "superblock": true, 00:17:34.582 "num_base_bdevs": 2, 00:17:34.582 "num_base_bdevs_discovered": 2, 00:17:34.582 "num_base_bdevs_operational": 2, 00:17:34.582 "base_bdevs_list": [ 00:17:34.582 { 00:17:34.582 "name": "spare", 00:17:34.582 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:34.582 "is_configured": true, 00:17:34.582 "data_offset": 256, 00:17:34.582 "data_size": 7936 00:17:34.582 }, 00:17:34.582 { 00:17:34.582 "name": "BaseBdev2", 00:17:34.582 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:34.582 "is_configured": true, 00:17:34.582 "data_offset": 256, 00:17:34.582 "data_size": 7936 00:17:34.582 } 00:17:34.582 ] 00:17:34.582 }' 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:34.582 
03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.582 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.842 [2024-11-20 03:24:24.244001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.842 "name": "raid_bdev1", 00:17:34.842 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:34.842 "strip_size_kb": 0, 00:17:34.842 "state": "online", 00:17:34.842 "raid_level": "raid1", 00:17:34.842 "superblock": true, 00:17:34.842 "num_base_bdevs": 2, 00:17:34.842 "num_base_bdevs_discovered": 1, 00:17:34.842 "num_base_bdevs_operational": 1, 00:17:34.842 "base_bdevs_list": [ 00:17:34.842 { 00:17:34.842 "name": null, 00:17:34.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.842 "is_configured": false, 00:17:34.842 "data_offset": 0, 00:17:34.842 "data_size": 7936 00:17:34.842 }, 00:17:34.842 { 00:17:34.842 "name": "BaseBdev2", 00:17:34.842 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:34.842 "is_configured": true, 00:17:34.842 "data_offset": 256, 00:17:34.842 "data_size": 7936 00:17:34.842 } 00:17:34.842 ] 00:17:34.842 }' 00:17:34.842 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.842 03:24:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.102 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.102 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.102 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.102 [2024-11-20 03:24:24.715262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.102 [2024-11-20 03:24:24.715507] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:35.102 [2024-11-20 03:24:24.715573] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:35.102 [2024-11-20 03:24:24.715644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.102 [2024-11-20 03:24:24.728998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:35.102 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.102 03:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:35.102 [2024-11-20 03:24:24.730863] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.483 "name": "raid_bdev1", 00:17:36.483 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:36.483 "strip_size_kb": 0, 00:17:36.483 "state": "online", 00:17:36.483 "raid_level": "raid1", 00:17:36.483 "superblock": true, 00:17:36.483 "num_base_bdevs": 2, 00:17:36.483 "num_base_bdevs_discovered": 2, 00:17:36.483 "num_base_bdevs_operational": 2, 00:17:36.483 "process": { 00:17:36.483 "type": "rebuild", 00:17:36.483 "target": "spare", 00:17:36.483 "progress": { 00:17:36.483 "blocks": 2560, 00:17:36.483 "percent": 32 00:17:36.483 } 00:17:36.483 }, 00:17:36.483 "base_bdevs_list": [ 00:17:36.483 { 00:17:36.483 "name": "spare", 00:17:36.483 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:36.483 "is_configured": true, 00:17:36.483 "data_offset": 256, 00:17:36.483 "data_size": 7936 00:17:36.483 }, 00:17:36.483 { 00:17:36.483 "name": "BaseBdev2", 00:17:36.483 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:36.483 "is_configured": true, 00:17:36.483 "data_offset": 256, 00:17:36.483 "data_size": 7936 00:17:36.483 } 00:17:36.483 ] 00:17:36.483 }' 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.483 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.483 [2024-11-20 03:24:25.878940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.483 [2024-11-20 03:24:25.936586] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.483 [2024-11-20 03:24:25.936715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.483 [2024-11-20 03:24:25.936732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.484 [2024-11-20 03:24:25.936752] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.484 03:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.484 03:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.484 "name": "raid_bdev1", 00:17:36.484 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:36.484 "strip_size_kb": 0, 00:17:36.484 "state": "online", 00:17:36.484 "raid_level": "raid1", 00:17:36.484 "superblock": true, 00:17:36.484 "num_base_bdevs": 2, 00:17:36.484 "num_base_bdevs_discovered": 1, 00:17:36.484 "num_base_bdevs_operational": 1, 00:17:36.484 "base_bdevs_list": [ 00:17:36.484 { 00:17:36.484 "name": null, 00:17:36.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.484 
"is_configured": false, 00:17:36.484 "data_offset": 0, 00:17:36.484 "data_size": 7936 00:17:36.484 }, 00:17:36.484 { 00:17:36.484 "name": "BaseBdev2", 00:17:36.484 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:36.484 "is_configured": true, 00:17:36.484 "data_offset": 256, 00:17:36.484 "data_size": 7936 00:17:36.484 } 00:17:36.484 ] 00:17:36.484 }' 00:17:36.484 03:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.484 03:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.053 03:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.053 03:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.054 03:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.054 [2024-11-20 03:24:26.414854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.054 [2024-11-20 03:24:26.414956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.054 [2024-11-20 03:24:26.415012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:37.054 [2024-11-20 03:24:26.415042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.054 [2024-11-20 03:24:26.415293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.054 [2024-11-20 03:24:26.415348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.054 [2024-11-20 03:24:26.415425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:37.054 [2024-11-20 03:24:26.415463] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:17:37.054 [2024-11-20 03:24:26.415501] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:37.054 [2024-11-20 03:24:26.415572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.054 [2024-11-20 03:24:26.428944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:37.054 spare 00:17:37.054 03:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.054 03:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:37.054 [2024-11-20 03:24:26.430804] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.993 "name": "raid_bdev1", 00:17:37.993 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:37.993 "strip_size_kb": 0, 00:17:37.993 "state": "online", 00:17:37.993 "raid_level": "raid1", 00:17:37.993 "superblock": true, 00:17:37.993 "num_base_bdevs": 2, 00:17:37.993 "num_base_bdevs_discovered": 2, 00:17:37.993 "num_base_bdevs_operational": 2, 00:17:37.993 "process": { 00:17:37.993 "type": "rebuild", 00:17:37.993 "target": "spare", 00:17:37.993 "progress": { 00:17:37.993 "blocks": 2560, 00:17:37.993 "percent": 32 00:17:37.993 } 00:17:37.993 }, 00:17:37.993 "base_bdevs_list": [ 00:17:37.993 { 00:17:37.993 "name": "spare", 00:17:37.993 "uuid": "4db04b72-7791-56ae-99de-7a33299511cd", 00:17:37.993 "is_configured": true, 00:17:37.993 "data_offset": 256, 00:17:37.993 "data_size": 7936 00:17:37.993 }, 00:17:37.993 { 00:17:37.993 "name": "BaseBdev2", 00:17:37.993 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:37.993 "is_configured": true, 00:17:37.993 "data_offset": 256, 00:17:37.993 "data_size": 7936 00:17:37.993 } 00:17:37.993 ] 00:17:37.993 }' 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:37.993 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.993 03:24:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.993 [2024-11-20 03:24:27.590699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.253 [2024-11-20 03:24:27.635612] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:38.253 [2024-11-20 03:24:27.635755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.253 [2024-11-20 03:24:27.635809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.253 [2024-11-20 03:24:27.635828] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.253 03:24:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.253 "name": "raid_bdev1", 00:17:38.253 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:38.253 "strip_size_kb": 0, 00:17:38.253 "state": "online", 00:17:38.253 "raid_level": "raid1", 00:17:38.253 "superblock": true, 00:17:38.253 "num_base_bdevs": 2, 00:17:38.253 "num_base_bdevs_discovered": 1, 00:17:38.253 "num_base_bdevs_operational": 1, 00:17:38.253 "base_bdevs_list": [ 00:17:38.253 { 00:17:38.253 "name": null, 00:17:38.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.253 "is_configured": false, 00:17:38.253 "data_offset": 0, 00:17:38.253 "data_size": 7936 00:17:38.253 }, 00:17:38.253 { 00:17:38.253 "name": "BaseBdev2", 00:17:38.253 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:38.253 "is_configured": true, 00:17:38.253 "data_offset": 256, 00:17:38.253 "data_size": 7936 00:17:38.253 } 00:17:38.253 ] 00:17:38.253 }' 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.253 03:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.513 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.774 "name": "raid_bdev1", 00:17:38.774 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:38.774 "strip_size_kb": 0, 00:17:38.774 "state": "online", 00:17:38.774 "raid_level": "raid1", 00:17:38.774 "superblock": true, 00:17:38.774 "num_base_bdevs": 2, 00:17:38.774 "num_base_bdevs_discovered": 1, 00:17:38.774 "num_base_bdevs_operational": 1, 00:17:38.774 "base_bdevs_list": [ 00:17:38.774 { 00:17:38.774 "name": null, 00:17:38.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.774 "is_configured": false, 00:17:38.774 "data_offset": 0, 00:17:38.774 "data_size": 7936 00:17:38.774 }, 00:17:38.774 { 00:17:38.774 "name": "BaseBdev2", 00:17:38.774 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:38.774 "is_configured": true, 
00:17:38.774 "data_offset": 256, 00:17:38.774 "data_size": 7936 00:17:38.774 } 00:17:38.774 ] 00:17:38.774 }' 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.774 [2024-11-20 03:24:28.282082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.774 [2024-11-20 03:24:28.282135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.774 [2024-11-20 03:24:28.282158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:38.774 [2024-11-20 03:24:28.282167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.774 [2024-11-20 03:24:28.282359] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.774 [2024-11-20 03:24:28.282379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.774 [2024-11-20 03:24:28.282441] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:38.774 [2024-11-20 03:24:28.282453] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:38.774 [2024-11-20 03:24:28.282462] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:38.774 [2024-11-20 03:24:28.282471] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:38.774 BaseBdev1 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.774 03:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.723 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.724 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.724 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.724 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.724 "name": "raid_bdev1", 00:17:39.724 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:39.724 "strip_size_kb": 0, 00:17:39.724 "state": "online", 00:17:39.724 "raid_level": "raid1", 00:17:39.724 "superblock": true, 00:17:39.724 "num_base_bdevs": 2, 00:17:39.724 "num_base_bdevs_discovered": 1, 00:17:39.724 "num_base_bdevs_operational": 1, 00:17:39.724 "base_bdevs_list": [ 00:17:39.724 { 00:17:39.724 "name": null, 00:17:39.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.724 "is_configured": false, 00:17:39.724 "data_offset": 0, 00:17:39.724 "data_size": 7936 00:17:39.724 }, 00:17:39.724 { 00:17:39.724 "name": "BaseBdev2", 00:17:39.724 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:39.724 "is_configured": true, 00:17:39.724 "data_offset": 256, 00:17:39.724 "data_size": 7936 00:17:39.724 } 00:17:39.724 ] 00:17:39.724 }' 00:17:39.724 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.724 03:24:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.297 "name": "raid_bdev1", 00:17:40.297 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:40.297 "strip_size_kb": 0, 00:17:40.297 "state": "online", 00:17:40.297 "raid_level": "raid1", 00:17:40.297 "superblock": true, 00:17:40.297 "num_base_bdevs": 2, 00:17:40.297 "num_base_bdevs_discovered": 1, 00:17:40.297 "num_base_bdevs_operational": 1, 00:17:40.297 "base_bdevs_list": [ 00:17:40.297 { 00:17:40.297 "name": null, 00:17:40.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.297 "is_configured": false, 00:17:40.297 "data_offset": 0, 00:17:40.297 
"data_size": 7936 00:17:40.297 }, 00:17:40.297 { 00:17:40.297 "name": "BaseBdev2", 00:17:40.297 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:40.297 "is_configured": true, 00:17:40.297 "data_offset": 256, 00:17:40.297 "data_size": 7936 00:17:40.297 } 00:17:40.297 ] 00:17:40.297 }' 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.297 [2024-11-20 03:24:29.863391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.297 [2024-11-20 03:24:29.863555] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:40.297 [2024-11-20 03:24:29.863570] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:40.297 request: 00:17:40.297 { 00:17:40.297 "base_bdev": "BaseBdev1", 00:17:40.297 "raid_bdev": "raid_bdev1", 00:17:40.297 "method": "bdev_raid_add_base_bdev", 00:17:40.297 "req_id": 1 00:17:40.297 } 00:17:40.297 Got JSON-RPC error response 00:17:40.297 response: 00:17:40.297 { 00:17:40.297 "code": -22, 00:17:40.297 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:40.297 } 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.297 03:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.678 "name": "raid_bdev1", 00:17:41.678 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:41.678 "strip_size_kb": 0, 00:17:41.678 "state": "online", 00:17:41.678 "raid_level": "raid1", 00:17:41.678 "superblock": true, 00:17:41.678 "num_base_bdevs": 2, 00:17:41.678 "num_base_bdevs_discovered": 1, 00:17:41.678 "num_base_bdevs_operational": 1, 00:17:41.678 "base_bdevs_list": [ 
00:17:41.678 { 00:17:41.678 "name": null, 00:17:41.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.678 "is_configured": false, 00:17:41.678 "data_offset": 0, 00:17:41.678 "data_size": 7936 00:17:41.678 }, 00:17:41.678 { 00:17:41.678 "name": "BaseBdev2", 00:17:41.678 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:41.678 "is_configured": true, 00:17:41.678 "data_offset": 256, 00:17:41.678 "data_size": 7936 00:17:41.678 } 00:17:41.678 ] 00:17:41.678 }' 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.678 03:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.938 "name": "raid_bdev1", 00:17:41.938 "uuid": "7b0d80ee-83c0-44d3-bc30-96e93f67eb90", 00:17:41.938 "strip_size_kb": 0, 00:17:41.938 "state": "online", 00:17:41.938 "raid_level": "raid1", 00:17:41.938 "superblock": true, 00:17:41.938 "num_base_bdevs": 2, 00:17:41.938 "num_base_bdevs_discovered": 1, 00:17:41.938 "num_base_bdevs_operational": 1, 00:17:41.938 "base_bdevs_list": [ 00:17:41.938 { 00:17:41.938 "name": null, 00:17:41.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.938 "is_configured": false, 00:17:41.938 "data_offset": 0, 00:17:41.938 "data_size": 7936 00:17:41.938 }, 00:17:41.938 { 00:17:41.938 "name": "BaseBdev2", 00:17:41.938 "uuid": "288fe687-6634-5670-b810-49ae696f8841", 00:17:41.938 "is_configured": true, 00:17:41.938 "data_offset": 256, 00:17:41.938 "data_size": 7936 00:17:41.938 } 00:17:41.938 ] 00:17:41.938 }' 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87579 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87579 ']' 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87579 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.938 
03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87579 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.938 killing process with pid 87579 00:17:41.938 Received shutdown signal, test time was about 60.000000 seconds 00:17:41.938 00:17:41.938 Latency(us) 00:17:41.938 [2024-11-20T03:24:31.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.938 [2024-11-20T03:24:31.573Z] =================================================================================================================== 00:17:41.938 [2024-11-20T03:24:31.573Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87579' 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87579 00:17:41.938 [2024-11-20 03:24:31.520232] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:41.938 [2024-11-20 03:24:31.520353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.938 03:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87579 00:17:41.938 [2024-11-20 03:24:31.520401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.938 [2024-11-20 03:24:31.520412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:42.198 [2024-11-20 03:24:31.827825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.579 03:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:17:43.579 00:17:43.579 real 0m19.677s 00:17:43.579 user 0m25.476s 00:17:43.579 sys 0m2.713s 00:17:43.579 ************************************ 00:17:43.579 END TEST raid_rebuild_test_sb_md_separate 00:17:43.579 ************************************ 00:17:43.579 03:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.579 03:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.579 03:24:32 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:43.579 03:24:32 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:43.579 03:24:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:43.579 03:24:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.579 03:24:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:43.579 ************************************ 00:17:43.579 START TEST raid_state_function_test_sb_md_interleaved 00:17:43.579 ************************************ 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:43.579 03:24:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88266 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88266' 00:17:43.579 Process raid pid: 88266 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88266 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88266 ']' 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.579 03:24:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.579 [2024-11-20 03:24:33.045630] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:17:43.579 [2024-11-20 03:24:33.045863] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.579 [2024-11-20 03:24:33.209820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.838 [2024-11-20 03:24:33.319096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.099 [2024-11-20 03:24:33.518818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.099 [2024-11-20 03:24:33.518943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.358 [2024-11-20 03:24:33.863317] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.358 [2024-11-20 03:24:33.863373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.358 [2024-11-20 03:24:33.863399] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.358 [2024-11-20 03:24:33.863408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.358 03:24:33 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.358 03:24:33 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.358 "name": "Existed_Raid", 00:17:44.358 "uuid": "93390cba-d4ce-4ae9-909c-e7088f114474", 00:17:44.358 "strip_size_kb": 0, 00:17:44.358 "state": "configuring", 00:17:44.358 "raid_level": "raid1", 00:17:44.358 "superblock": true, 00:17:44.358 "num_base_bdevs": 2, 00:17:44.358 "num_base_bdevs_discovered": 0, 00:17:44.358 "num_base_bdevs_operational": 2, 00:17:44.358 "base_bdevs_list": [ 00:17:44.358 { 00:17:44.358 "name": "BaseBdev1", 00:17:44.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.358 "is_configured": false, 00:17:44.358 "data_offset": 0, 00:17:44.358 "data_size": 0 00:17:44.358 }, 00:17:44.358 { 00:17:44.358 "name": "BaseBdev2", 00:17:44.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.358 "is_configured": false, 00:17:44.358 "data_offset": 0, 00:17:44.358 "data_size": 0 00:17:44.358 } 00:17:44.358 ] 00:17:44.358 }' 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.358 03:24:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.928 [2024-11-20 03:24:34.334536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.928 [2024-11-20 03:24:34.334568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.928 [2024-11-20 03:24:34.346521] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.928 [2024-11-20 03:24:34.346565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.928 [2024-11-20 03:24:34.346574] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.928 [2024-11-20 03:24:34.346584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.928 [2024-11-20 03:24:34.394388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.928 BaseBdev1 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:44.928 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.929 [ 00:17:44.929 { 00:17:44.929 "name": "BaseBdev1", 00:17:44.929 "aliases": [ 00:17:44.929 "8c2a7cee-cae2-4a4a-b956-383f1b0e60da" 00:17:44.929 ], 00:17:44.929 "product_name": "Malloc disk", 00:17:44.929 "block_size": 4128, 00:17:44.929 "num_blocks": 8192, 00:17:44.929 "uuid": "8c2a7cee-cae2-4a4a-b956-383f1b0e60da", 00:17:44.929 "md_size": 32, 00:17:44.929 
"md_interleave": true, 00:17:44.929 "dif_type": 0, 00:17:44.929 "assigned_rate_limits": { 00:17:44.929 "rw_ios_per_sec": 0, 00:17:44.929 "rw_mbytes_per_sec": 0, 00:17:44.929 "r_mbytes_per_sec": 0, 00:17:44.929 "w_mbytes_per_sec": 0 00:17:44.929 }, 00:17:44.929 "claimed": true, 00:17:44.929 "claim_type": "exclusive_write", 00:17:44.929 "zoned": false, 00:17:44.929 "supported_io_types": { 00:17:44.929 "read": true, 00:17:44.929 "write": true, 00:17:44.929 "unmap": true, 00:17:44.929 "flush": true, 00:17:44.929 "reset": true, 00:17:44.929 "nvme_admin": false, 00:17:44.929 "nvme_io": false, 00:17:44.929 "nvme_io_md": false, 00:17:44.929 "write_zeroes": true, 00:17:44.929 "zcopy": true, 00:17:44.929 "get_zone_info": false, 00:17:44.929 "zone_management": false, 00:17:44.929 "zone_append": false, 00:17:44.929 "compare": false, 00:17:44.929 "compare_and_write": false, 00:17:44.929 "abort": true, 00:17:44.929 "seek_hole": false, 00:17:44.929 "seek_data": false, 00:17:44.929 "copy": true, 00:17:44.929 "nvme_iov_md": false 00:17:44.929 }, 00:17:44.929 "memory_domains": [ 00:17:44.929 { 00:17:44.929 "dma_device_id": "system", 00:17:44.929 "dma_device_type": 1 00:17:44.929 }, 00:17:44.929 { 00:17:44.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.929 "dma_device_type": 2 00:17:44.929 } 00:17:44.929 ], 00:17:44.929 "driver_specific": {} 00:17:44.929 } 00:17:44.929 ] 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.929 03:24:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.929 "name": "Existed_Raid", 00:17:44.929 "uuid": "4b45b78a-708e-456c-a651-f496043cfe14", 00:17:44.929 "strip_size_kb": 0, 00:17:44.929 "state": "configuring", 00:17:44.929 "raid_level": "raid1", 
00:17:44.929 "superblock": true, 00:17:44.929 "num_base_bdevs": 2, 00:17:44.929 "num_base_bdevs_discovered": 1, 00:17:44.929 "num_base_bdevs_operational": 2, 00:17:44.929 "base_bdevs_list": [ 00:17:44.929 { 00:17:44.929 "name": "BaseBdev1", 00:17:44.929 "uuid": "8c2a7cee-cae2-4a4a-b956-383f1b0e60da", 00:17:44.929 "is_configured": true, 00:17:44.929 "data_offset": 256, 00:17:44.929 "data_size": 7936 00:17:44.929 }, 00:17:44.929 { 00:17:44.929 "name": "BaseBdev2", 00:17:44.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.929 "is_configured": false, 00:17:44.929 "data_offset": 0, 00:17:44.929 "data_size": 0 00:17:44.929 } 00:17:44.929 ] 00:17:44.929 }' 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.929 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.500 [2024-11-20 03:24:34.889567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.500 [2024-11-20 03:24:34.889671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.500 [2024-11-20 03:24:34.901605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.500 [2024-11-20 03:24:34.903378] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.500 [2024-11-20 03:24:34.903452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.500 
03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.500 "name": "Existed_Raid", 00:17:45.500 "uuid": "0dc2f4e7-3292-4967-96d3-9af512e4aef2", 00:17:45.500 "strip_size_kb": 0, 00:17:45.500 "state": "configuring", 00:17:45.500 "raid_level": "raid1", 00:17:45.500 "superblock": true, 00:17:45.500 "num_base_bdevs": 2, 00:17:45.500 "num_base_bdevs_discovered": 1, 00:17:45.500 "num_base_bdevs_operational": 2, 00:17:45.500 "base_bdevs_list": [ 00:17:45.500 { 00:17:45.500 "name": "BaseBdev1", 00:17:45.500 "uuid": "8c2a7cee-cae2-4a4a-b956-383f1b0e60da", 00:17:45.500 "is_configured": true, 00:17:45.500 "data_offset": 256, 00:17:45.500 "data_size": 7936 00:17:45.500 }, 00:17:45.500 { 00:17:45.500 "name": "BaseBdev2", 00:17:45.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.500 "is_configured": false, 00:17:45.500 "data_offset": 0, 00:17:45.500 "data_size": 0 00:17:45.500 } 00:17:45.500 ] 00:17:45.500 }' 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:45.500 03:24:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.760 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:45.760 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.760 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.760 [2024-11-20 03:24:35.391763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.760 [2024-11-20 03:24:35.392046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:45.760 [2024-11-20 03:24:35.392065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:45.760 [2024-11-20 03:24:35.392163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:45.760 [2024-11-20 03:24:35.392252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:45.760 [2024-11-20 03:24:35.392264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:45.760 [2024-11-20 03:24:35.392325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.021 BaseBdev2 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.021 [ 00:17:46.021 { 00:17:46.021 "name": "BaseBdev2", 00:17:46.021 "aliases": [ 00:17:46.021 "212c8dc8-ed0b-41d2-b0d6-cf493ff70f8a" 00:17:46.021 ], 00:17:46.021 "product_name": "Malloc disk", 00:17:46.021 "block_size": 4128, 00:17:46.021 "num_blocks": 8192, 00:17:46.021 "uuid": "212c8dc8-ed0b-41d2-b0d6-cf493ff70f8a", 00:17:46.021 "md_size": 32, 00:17:46.021 "md_interleave": true, 00:17:46.021 "dif_type": 0, 00:17:46.021 "assigned_rate_limits": { 00:17:46.021 "rw_ios_per_sec": 0, 00:17:46.021 "rw_mbytes_per_sec": 0, 00:17:46.021 "r_mbytes_per_sec": 0, 00:17:46.021 "w_mbytes_per_sec": 0 00:17:46.021 }, 00:17:46.021 "claimed": true, 00:17:46.021 "claim_type": "exclusive_write", 
00:17:46.021 "zoned": false, 00:17:46.021 "supported_io_types": { 00:17:46.021 "read": true, 00:17:46.021 "write": true, 00:17:46.021 "unmap": true, 00:17:46.021 "flush": true, 00:17:46.021 "reset": true, 00:17:46.021 "nvme_admin": false, 00:17:46.021 "nvme_io": false, 00:17:46.021 "nvme_io_md": false, 00:17:46.021 "write_zeroes": true, 00:17:46.021 "zcopy": true, 00:17:46.021 "get_zone_info": false, 00:17:46.021 "zone_management": false, 00:17:46.021 "zone_append": false, 00:17:46.021 "compare": false, 00:17:46.021 "compare_and_write": false, 00:17:46.021 "abort": true, 00:17:46.021 "seek_hole": false, 00:17:46.021 "seek_data": false, 00:17:46.021 "copy": true, 00:17:46.021 "nvme_iov_md": false 00:17:46.021 }, 00:17:46.021 "memory_domains": [ 00:17:46.021 { 00:17:46.021 "dma_device_id": "system", 00:17:46.021 "dma_device_type": 1 00:17:46.021 }, 00:17:46.021 { 00:17:46.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.021 "dma_device_type": 2 00:17:46.021 } 00:17:46.021 ], 00:17:46.021 "driver_specific": {} 00:17:46.021 } 00:17:46.021 ] 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.021 
03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.021 "name": "Existed_Raid", 00:17:46.021 "uuid": "0dc2f4e7-3292-4967-96d3-9af512e4aef2", 00:17:46.021 "strip_size_kb": 0, 00:17:46.021 "state": "online", 00:17:46.021 "raid_level": "raid1", 00:17:46.021 "superblock": true, 00:17:46.021 "num_base_bdevs": 2, 00:17:46.021 "num_base_bdevs_discovered": 2, 00:17:46.021 
"num_base_bdevs_operational": 2, 00:17:46.021 "base_bdevs_list": [ 00:17:46.021 { 00:17:46.021 "name": "BaseBdev1", 00:17:46.021 "uuid": "8c2a7cee-cae2-4a4a-b956-383f1b0e60da", 00:17:46.021 "is_configured": true, 00:17:46.021 "data_offset": 256, 00:17:46.021 "data_size": 7936 00:17:46.021 }, 00:17:46.021 { 00:17:46.021 "name": "BaseBdev2", 00:17:46.021 "uuid": "212c8dc8-ed0b-41d2-b0d6-cf493ff70f8a", 00:17:46.021 "is_configured": true, 00:17:46.021 "data_offset": 256, 00:17:46.021 "data_size": 7936 00:17:46.021 } 00:17:46.021 ] 00:17:46.021 }' 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.021 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.282 03:24:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.282 [2024-11-20 03:24:35.863245] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:46.282 "name": "Existed_Raid", 00:17:46.282 "aliases": [ 00:17:46.282 "0dc2f4e7-3292-4967-96d3-9af512e4aef2" 00:17:46.282 ], 00:17:46.282 "product_name": "Raid Volume", 00:17:46.282 "block_size": 4128, 00:17:46.282 "num_blocks": 7936, 00:17:46.282 "uuid": "0dc2f4e7-3292-4967-96d3-9af512e4aef2", 00:17:46.282 "md_size": 32, 00:17:46.282 "md_interleave": true, 00:17:46.282 "dif_type": 0, 00:17:46.282 "assigned_rate_limits": { 00:17:46.282 "rw_ios_per_sec": 0, 00:17:46.282 "rw_mbytes_per_sec": 0, 00:17:46.282 "r_mbytes_per_sec": 0, 00:17:46.282 "w_mbytes_per_sec": 0 00:17:46.282 }, 00:17:46.282 "claimed": false, 00:17:46.282 "zoned": false, 00:17:46.282 "supported_io_types": { 00:17:46.282 "read": true, 00:17:46.282 "write": true, 00:17:46.282 "unmap": false, 00:17:46.282 "flush": false, 00:17:46.282 "reset": true, 00:17:46.282 "nvme_admin": false, 00:17:46.282 "nvme_io": false, 00:17:46.282 "nvme_io_md": false, 00:17:46.282 "write_zeroes": true, 00:17:46.282 "zcopy": false, 00:17:46.282 "get_zone_info": false, 00:17:46.282 "zone_management": false, 00:17:46.282 "zone_append": false, 00:17:46.282 "compare": false, 00:17:46.282 "compare_and_write": false, 00:17:46.282 "abort": false, 00:17:46.282 "seek_hole": false, 00:17:46.282 "seek_data": false, 00:17:46.282 "copy": false, 00:17:46.282 "nvme_iov_md": false 00:17:46.282 }, 00:17:46.282 "memory_domains": [ 00:17:46.282 { 00:17:46.282 "dma_device_id": "system", 00:17:46.282 "dma_device_type": 1 00:17:46.282 }, 00:17:46.282 { 00:17:46.282 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:46.282 "dma_device_type": 2 00:17:46.282 }, 00:17:46.282 { 00:17:46.282 "dma_device_id": "system", 00:17:46.282 "dma_device_type": 1 00:17:46.282 }, 00:17:46.282 { 00:17:46.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.282 "dma_device_type": 2 00:17:46.282 } 00:17:46.282 ], 00:17:46.282 "driver_specific": { 00:17:46.282 "raid": { 00:17:46.282 "uuid": "0dc2f4e7-3292-4967-96d3-9af512e4aef2", 00:17:46.282 "strip_size_kb": 0, 00:17:46.282 "state": "online", 00:17:46.282 "raid_level": "raid1", 00:17:46.282 "superblock": true, 00:17:46.282 "num_base_bdevs": 2, 00:17:46.282 "num_base_bdevs_discovered": 2, 00:17:46.282 "num_base_bdevs_operational": 2, 00:17:46.282 "base_bdevs_list": [ 00:17:46.282 { 00:17:46.282 "name": "BaseBdev1", 00:17:46.282 "uuid": "8c2a7cee-cae2-4a4a-b956-383f1b0e60da", 00:17:46.282 "is_configured": true, 00:17:46.282 "data_offset": 256, 00:17:46.282 "data_size": 7936 00:17:46.282 }, 00:17:46.282 { 00:17:46.282 "name": "BaseBdev2", 00:17:46.282 "uuid": "212c8dc8-ed0b-41d2-b0d6-cf493ff70f8a", 00:17:46.282 "is_configured": true, 00:17:46.282 "data_offset": 256, 00:17:46.282 "data_size": 7936 00:17:46.282 } 00:17:46.282 ] 00:17:46.282 } 00:17:46.282 } 00:17:46.282 }' 00:17:46.282 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:46.543 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:46.543 BaseBdev2' 00:17:46.543 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.543 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:46.543 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:46.543 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:46.543 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.543 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.543 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.543 03:24:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:46.543 
03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.543 [2024-11-20 03:24:36.070718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.543 03:24:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.543 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.804 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.804 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.804 "name": "Existed_Raid", 00:17:46.804 "uuid": "0dc2f4e7-3292-4967-96d3-9af512e4aef2", 00:17:46.804 "strip_size_kb": 0, 00:17:46.804 "state": "online", 00:17:46.804 "raid_level": "raid1", 00:17:46.804 "superblock": true, 00:17:46.804 "num_base_bdevs": 2, 00:17:46.804 "num_base_bdevs_discovered": 1, 00:17:46.804 "num_base_bdevs_operational": 1, 00:17:46.804 "base_bdevs_list": [ 00:17:46.804 { 00:17:46.804 "name": null, 00:17:46.804 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:46.804 "is_configured": false, 00:17:46.804 "data_offset": 0, 00:17:46.804 "data_size": 7936 00:17:46.804 }, 00:17:46.804 { 00:17:46.804 "name": "BaseBdev2", 00:17:46.804 "uuid": "212c8dc8-ed0b-41d2-b0d6-cf493ff70f8a", 00:17:46.804 "is_configured": true, 00:17:46.804 "data_offset": 256, 00:17:46.804 "data_size": 7936 00:17:46.804 } 00:17:46.804 ] 00:17:46.804 }' 00:17:46.804 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.804 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:47.064 03:24:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.064 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.064 [2024-11-20 03:24:36.690726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.064 [2024-11-20 03:24:36.690834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.325 [2024-11-20 03:24:36.782481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.325 [2024-11-20 03:24:36.782531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.325 [2024-11-20 03:24:36.782542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88266 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88266 ']' 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88266 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88266 00:17:47.325 killing process with pid 88266 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88266' 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88266 00:17:47.325 [2024-11-20 03:24:36.882069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.325 03:24:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88266 00:17:47.325 [2024-11-20 03:24:36.898877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.707 
************************************ 00:17:48.707 END TEST raid_state_function_test_sb_md_interleaved 00:17:48.708 03:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:48.708 00:17:48.708 real 0m5.003s 00:17:48.708 user 0m7.269s 00:17:48.708 sys 0m0.868s 00:17:48.708 03:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.708 03:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.708 ************************************ 00:17:48.708 03:24:38 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:48.708 03:24:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:48.708 03:24:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.708 03:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:48.708 ************************************ 00:17:48.708 START TEST raid_superblock_test_md_interleaved 00:17:48.708 ************************************ 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88518 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88518 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88518 ']' 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.708 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.708 [2024-11-20 03:24:38.127610] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:48.708 [2024-11-20 03:24:38.127874] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88518 ] 00:17:48.708 [2024-11-20 03:24:38.307179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.968 [2024-11-20 03:24:38.418119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.228 [2024-11-20 03:24:38.619225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.228 [2024-11-20 03:24:38.619352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.488 malloc1 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.488 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.488 [2024-11-20 03:24:38.984325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:49.488 [2024-11-20 03:24:38.984379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.488 [2024-11-20 03:24:38.984400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:49.489 [2024-11-20 03:24:38.984409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.489 
[2024-11-20 03:24:38.986165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.489 [2024-11-20 03:24:38.986199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:49.489 pt1 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.489 03:24:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.489 malloc2 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.489 [2024-11-20 03:24:39.038171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:49.489 [2024-11-20 03:24:39.038262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.489 [2024-11-20 03:24:39.038298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:49.489 [2024-11-20 03:24:39.038323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.489 [2024-11-20 03:24:39.040134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.489 [2024-11-20 03:24:39.040218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.489 pt2 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.489 [2024-11-20 03:24:39.050188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:49.489 [2024-11-20 03:24:39.051943] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.489 [2024-11-20 03:24:39.052178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:49.489 [2024-11-20 03:24:39.052223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:49.489 [2024-11-20 03:24:39.052309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:49.489 [2024-11-20 03:24:39.052404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:49.489 [2024-11-20 03:24:39.052448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:49.489 [2024-11-20 03:24:39.052550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.489 
03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.489 "name": "raid_bdev1", 00:17:49.489 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:49.489 "strip_size_kb": 0, 00:17:49.489 "state": "online", 00:17:49.489 "raid_level": "raid1", 00:17:49.489 "superblock": true, 00:17:49.489 "num_base_bdevs": 2, 00:17:49.489 "num_base_bdevs_discovered": 2, 00:17:49.489 "num_base_bdevs_operational": 2, 00:17:49.489 "base_bdevs_list": [ 00:17:49.489 { 00:17:49.489 "name": "pt1", 00:17:49.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.489 "is_configured": true, 00:17:49.489 "data_offset": 256, 00:17:49.489 "data_size": 7936 00:17:49.489 }, 00:17:49.489 { 00:17:49.489 "name": "pt2", 00:17:49.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.489 "is_configured": true, 00:17:49.489 "data_offset": 256, 00:17:49.489 "data_size": 7936 00:17:49.489 } 00:17:49.489 ] 00:17:49.489 }' 00:17:49.489 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.489 03:24:39 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.058 [2024-11-20 03:24:39.521584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.058 "name": "raid_bdev1", 00:17:50.058 "aliases": [ 00:17:50.058 "b03adf20-9cec-405a-8b16-97aeaddf4867" 00:17:50.058 ], 00:17:50.058 "product_name": "Raid Volume", 00:17:50.058 "block_size": 4128, 00:17:50.058 "num_blocks": 7936, 00:17:50.058 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:50.058 "md_size": 32, 
00:17:50.058 "md_interleave": true, 00:17:50.058 "dif_type": 0, 00:17:50.058 "assigned_rate_limits": { 00:17:50.058 "rw_ios_per_sec": 0, 00:17:50.058 "rw_mbytes_per_sec": 0, 00:17:50.058 "r_mbytes_per_sec": 0, 00:17:50.058 "w_mbytes_per_sec": 0 00:17:50.058 }, 00:17:50.058 "claimed": false, 00:17:50.058 "zoned": false, 00:17:50.058 "supported_io_types": { 00:17:50.058 "read": true, 00:17:50.058 "write": true, 00:17:50.058 "unmap": false, 00:17:50.058 "flush": false, 00:17:50.058 "reset": true, 00:17:50.058 "nvme_admin": false, 00:17:50.058 "nvme_io": false, 00:17:50.058 "nvme_io_md": false, 00:17:50.058 "write_zeroes": true, 00:17:50.058 "zcopy": false, 00:17:50.058 "get_zone_info": false, 00:17:50.058 "zone_management": false, 00:17:50.058 "zone_append": false, 00:17:50.058 "compare": false, 00:17:50.058 "compare_and_write": false, 00:17:50.058 "abort": false, 00:17:50.058 "seek_hole": false, 00:17:50.058 "seek_data": false, 00:17:50.058 "copy": false, 00:17:50.058 "nvme_iov_md": false 00:17:50.058 }, 00:17:50.058 "memory_domains": [ 00:17:50.058 { 00:17:50.058 "dma_device_id": "system", 00:17:50.058 "dma_device_type": 1 00:17:50.058 }, 00:17:50.058 { 00:17:50.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.058 "dma_device_type": 2 00:17:50.058 }, 00:17:50.058 { 00:17:50.058 "dma_device_id": "system", 00:17:50.058 "dma_device_type": 1 00:17:50.058 }, 00:17:50.058 { 00:17:50.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.058 "dma_device_type": 2 00:17:50.058 } 00:17:50.058 ], 00:17:50.058 "driver_specific": { 00:17:50.058 "raid": { 00:17:50.058 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:50.058 "strip_size_kb": 0, 00:17:50.058 "state": "online", 00:17:50.058 "raid_level": "raid1", 00:17:50.058 "superblock": true, 00:17:50.058 "num_base_bdevs": 2, 00:17:50.058 "num_base_bdevs_discovered": 2, 00:17:50.058 "num_base_bdevs_operational": 2, 00:17:50.058 "base_bdevs_list": [ 00:17:50.058 { 00:17:50.058 "name": "pt1", 00:17:50.058 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:50.058 "is_configured": true, 00:17:50.058 "data_offset": 256, 00:17:50.058 "data_size": 7936 00:17:50.058 }, 00:17:50.058 { 00:17:50.058 "name": "pt2", 00:17:50.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.058 "is_configured": true, 00:17:50.058 "data_offset": 256, 00:17:50.058 "data_size": 7936 00:17:50.058 } 00:17:50.058 ] 00:17:50.058 } 00:17:50.058 } 00:17:50.058 }' 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:50.058 pt2' 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:50.058 03:24:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.058 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 [2024-11-20 03:24:39.729183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b03adf20-9cec-405a-8b16-97aeaddf4867 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z b03adf20-9cec-405a-8b16-97aeaddf4867 ']' 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 [2024-11-20 03:24:39.760886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.319 [2024-11-20 03:24:39.760910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.319 [2024-11-20 03:24:39.760981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.319 [2024-11-20 03:24:39.761026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.319 [2024-11-20 03:24:39.761053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 03:24:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 03:24:39 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.319 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.319 [2024-11-20 03:24:39.896718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:50.319 [2024-11-20 03:24:39.898482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:50.319 [2024-11-20 03:24:39.898553] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:17:50.319 [2024-11-20 03:24:39.898615] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:50.319 [2024-11-20 03:24:39.898640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.319 [2024-11-20 03:24:39.898650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:50.320 request: 00:17:50.320 { 00:17:50.320 "name": "raid_bdev1", 00:17:50.320 "raid_level": "raid1", 00:17:50.320 "base_bdevs": [ 00:17:50.320 "malloc1", 00:17:50.320 "malloc2" 00:17:50.320 ], 00:17:50.320 "superblock": false, 00:17:50.320 "method": "bdev_raid_create", 00:17:50.320 "req_id": 1 00:17:50.320 } 00:17:50.320 Got JSON-RPC error response 00:17:50.320 response: 00:17:50.320 { 00:17:50.320 "code": -17, 00:17:50.320 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:50.320 } 00:17:50.320 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:50.320 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:50.320 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.320 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.320 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.320 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.320 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.320 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.320 03:24:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:50.320 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.580 [2024-11-20 03:24:39.960562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.580 [2024-11-20 03:24:39.960622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.580 [2024-11-20 03:24:39.960637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:50.580 [2024-11-20 03:24:39.960649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.580 [2024-11-20 03:24:39.962516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.580 [2024-11-20 03:24:39.962555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:50.580 [2024-11-20 03:24:39.962598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:50.580 [2024-11-20 03:24:39.962693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.580 pt1 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.580 03:24:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.580 03:24:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.580 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.580 
"name": "raid_bdev1", 00:17:50.580 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:50.580 "strip_size_kb": 0, 00:17:50.580 "state": "configuring", 00:17:50.580 "raid_level": "raid1", 00:17:50.580 "superblock": true, 00:17:50.580 "num_base_bdevs": 2, 00:17:50.580 "num_base_bdevs_discovered": 1, 00:17:50.580 "num_base_bdevs_operational": 2, 00:17:50.580 "base_bdevs_list": [ 00:17:50.580 { 00:17:50.580 "name": "pt1", 00:17:50.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.580 "is_configured": true, 00:17:50.580 "data_offset": 256, 00:17:50.580 "data_size": 7936 00:17:50.580 }, 00:17:50.580 { 00:17:50.580 "name": null, 00:17:50.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.580 "is_configured": false, 00:17:50.580 "data_offset": 256, 00:17:50.580 "data_size": 7936 00:17:50.580 } 00:17:50.580 ] 00:17:50.580 }' 00:17:50.580 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.580 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.841 [2024-11-20 03:24:40.395802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.841 [2024-11-20 03:24:40.395859] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.841 [2024-11-20 03:24:40.395876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:50.841 [2024-11-20 03:24:40.395886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.841 [2024-11-20 03:24:40.396015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.841 [2024-11-20 03:24:40.396036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.841 [2024-11-20 03:24:40.396072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:50.841 [2024-11-20 03:24:40.396092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.841 [2024-11-20 03:24:40.396166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:50.841 [2024-11-20 03:24:40.396181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:50.841 [2024-11-20 03:24:40.396245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:50.841 [2024-11-20 03:24:40.396318] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:50.841 [2024-11-20 03:24:40.396330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:50.841 [2024-11-20 03:24:40.396384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.841 pt2 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:50.841 03:24:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.841 "name": 
"raid_bdev1", 00:17:50.841 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:50.841 "strip_size_kb": 0, 00:17:50.841 "state": "online", 00:17:50.841 "raid_level": "raid1", 00:17:50.841 "superblock": true, 00:17:50.841 "num_base_bdevs": 2, 00:17:50.841 "num_base_bdevs_discovered": 2, 00:17:50.841 "num_base_bdevs_operational": 2, 00:17:50.841 "base_bdevs_list": [ 00:17:50.841 { 00:17:50.841 "name": "pt1", 00:17:50.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.841 "is_configured": true, 00:17:50.841 "data_offset": 256, 00:17:50.841 "data_size": 7936 00:17:50.841 }, 00:17:50.841 { 00:17:50.841 "name": "pt2", 00:17:50.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.841 "is_configured": true, 00:17:50.841 "data_offset": 256, 00:17:50.841 "data_size": 7936 00:17:50.841 } 00:17:50.841 ] 00:17:50.841 }' 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.841 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.411 03:24:40 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.411 [2024-11-20 03:24:40.859215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.411 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:51.411 "name": "raid_bdev1", 00:17:51.411 "aliases": [ 00:17:51.411 "b03adf20-9cec-405a-8b16-97aeaddf4867" 00:17:51.411 ], 00:17:51.412 "product_name": "Raid Volume", 00:17:51.412 "block_size": 4128, 00:17:51.412 "num_blocks": 7936, 00:17:51.412 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:51.412 "md_size": 32, 00:17:51.412 "md_interleave": true, 00:17:51.412 "dif_type": 0, 00:17:51.412 "assigned_rate_limits": { 00:17:51.412 "rw_ios_per_sec": 0, 00:17:51.412 "rw_mbytes_per_sec": 0, 00:17:51.412 "r_mbytes_per_sec": 0, 00:17:51.412 "w_mbytes_per_sec": 0 00:17:51.412 }, 00:17:51.412 "claimed": false, 00:17:51.412 "zoned": false, 00:17:51.412 "supported_io_types": { 00:17:51.412 "read": true, 00:17:51.412 "write": true, 00:17:51.412 "unmap": false, 00:17:51.412 "flush": false, 00:17:51.412 "reset": true, 00:17:51.412 "nvme_admin": false, 00:17:51.412 "nvme_io": false, 00:17:51.412 "nvme_io_md": false, 00:17:51.412 "write_zeroes": true, 00:17:51.412 "zcopy": false, 00:17:51.412 "get_zone_info": false, 00:17:51.412 "zone_management": false, 00:17:51.412 "zone_append": false, 00:17:51.412 "compare": false, 00:17:51.412 "compare_and_write": false, 00:17:51.412 "abort": false, 00:17:51.412 "seek_hole": false, 00:17:51.412 "seek_data": false, 00:17:51.412 "copy": false, 00:17:51.412 "nvme_iov_md": 
false 00:17:51.412 }, 00:17:51.412 "memory_domains": [ 00:17:51.412 { 00:17:51.412 "dma_device_id": "system", 00:17:51.412 "dma_device_type": 1 00:17:51.412 }, 00:17:51.412 { 00:17:51.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.412 "dma_device_type": 2 00:17:51.412 }, 00:17:51.412 { 00:17:51.412 "dma_device_id": "system", 00:17:51.412 "dma_device_type": 1 00:17:51.412 }, 00:17:51.412 { 00:17:51.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.412 "dma_device_type": 2 00:17:51.412 } 00:17:51.412 ], 00:17:51.412 "driver_specific": { 00:17:51.412 "raid": { 00:17:51.412 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:51.412 "strip_size_kb": 0, 00:17:51.412 "state": "online", 00:17:51.412 "raid_level": "raid1", 00:17:51.412 "superblock": true, 00:17:51.412 "num_base_bdevs": 2, 00:17:51.412 "num_base_bdevs_discovered": 2, 00:17:51.412 "num_base_bdevs_operational": 2, 00:17:51.412 "base_bdevs_list": [ 00:17:51.412 { 00:17:51.412 "name": "pt1", 00:17:51.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.412 "is_configured": true, 00:17:51.412 "data_offset": 256, 00:17:51.412 "data_size": 7936 00:17:51.412 }, 00:17:51.412 { 00:17:51.412 "name": "pt2", 00:17:51.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.412 "is_configured": true, 00:17:51.412 "data_offset": 256, 00:17:51.412 "data_size": 7936 00:17:51.412 } 00:17:51.412 ] 00:17:51.412 } 00:17:51.412 } 00:17:51.412 }' 00:17:51.412 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:51.412 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:51.412 pt2' 00:17:51.412 03:24:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.412 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.672 [2024-11-20 03:24:41.098875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' b03adf20-9cec-405a-8b16-97aeaddf4867 '!=' b03adf20-9cec-405a-8b16-97aeaddf4867 ']' 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.672 [2024-11-20 03:24:41.126607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.672 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:51.672 "name": "raid_bdev1", 00:17:51.672 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:51.672 "strip_size_kb": 0, 00:17:51.672 "state": "online", 00:17:51.672 "raid_level": "raid1", 00:17:51.672 "superblock": true, 00:17:51.672 "num_base_bdevs": 2, 00:17:51.672 "num_base_bdevs_discovered": 1, 00:17:51.672 "num_base_bdevs_operational": 1, 00:17:51.672 "base_bdevs_list": [ 00:17:51.672 { 00:17:51.672 "name": null, 00:17:51.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.673 "is_configured": false, 00:17:51.673 "data_offset": 0, 00:17:51.673 "data_size": 7936 00:17:51.673 }, 00:17:51.673 { 00:17:51.673 "name": "pt2", 00:17:51.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.673 "is_configured": true, 00:17:51.673 "data_offset": 256, 00:17:51.673 "data_size": 7936 00:17:51.673 } 00:17:51.673 ] 00:17:51.673 }' 00:17:51.673 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.673 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.932 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.932 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.932 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.932 [2024-11-20 03:24:41.509927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.932 [2024-11-20 03:24:41.509953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.932 [2024-11-20 03:24:41.510001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.932 [2024-11-20 03:24:41.510034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:51.932 [2024-11-20 03:24:41.510045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:51.932 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.932 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.932 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.932 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.932 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:51.932 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.192 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:52.192 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:52.192 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:52.192 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:52.192 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:52.192 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.192 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.193 [2024-11-20 03:24:41.585810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.193 [2024-11-20 03:24:41.585855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.193 [2024-11-20 03:24:41.585870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:52.193 [2024-11-20 03:24:41.585879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.193 [2024-11-20 03:24:41.587777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.193 [2024-11-20 03:24:41.587831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.193 [2024-11-20 03:24:41.587872] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:52.193 [2024-11-20 03:24:41.587920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.193 [2024-11-20 03:24:41.587972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.193 [2024-11-20 03:24:41.587983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:17:52.193 [2024-11-20 03:24:41.588059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:52.193 [2024-11-20 03:24:41.588120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.193 [2024-11-20 03:24:41.588147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:52.193 [2024-11-20 03:24:41.588216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.193 pt2 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.193 03:24:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.193 "name": "raid_bdev1", 00:17:52.193 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:52.193 "strip_size_kb": 0, 00:17:52.193 "state": "online", 00:17:52.193 "raid_level": "raid1", 00:17:52.193 "superblock": true, 00:17:52.193 "num_base_bdevs": 2, 00:17:52.193 "num_base_bdevs_discovered": 1, 00:17:52.193 "num_base_bdevs_operational": 1, 00:17:52.193 "base_bdevs_list": [ 00:17:52.193 { 00:17:52.193 "name": null, 00:17:52.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.193 "is_configured": false, 00:17:52.193 "data_offset": 256, 00:17:52.193 "data_size": 7936 00:17:52.193 }, 00:17:52.193 { 00:17:52.193 "name": "pt2", 00:17:52.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.193 "is_configured": true, 00:17:52.193 "data_offset": 256, 00:17:52.193 "data_size": 7936 00:17:52.193 } 00:17:52.193 ] 00:17:52.193 }' 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.193 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.453 03:24:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.453 03:24:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 [2024-11-20 03:24:41.997065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.453 [2024-11-20 03:24:41.997093] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.453 [2024-11-20 03:24:41.997136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.453 [2024-11-20 03:24:41.997172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.453 [2024-11-20 03:24:41.997179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 [2024-11-20 03:24:42.057002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.453 [2024-11-20 03:24:42.057050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.453 [2024-11-20 03:24:42.057069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:52.453 [2024-11-20 03:24:42.057077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.453 [2024-11-20 03:24:42.058922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.453 [2024-11-20 03:24:42.058956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.453 [2024-11-20 03:24:42.058996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.453 [2024-11-20 03:24:42.059062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.453 [2024-11-20 03:24:42.059140] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:52.453 [2024-11-20 03:24:42.059159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.453 [2024-11-20 03:24:42.059174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:52.453 [2024-11-20 03:24:42.059228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.453 [2024-11-20 03:24:42.059287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:17:52.453 [2024-11-20 03:24:42.059295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:52.453 [2024-11-20 03:24:42.059350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:52.453 [2024-11-20 03:24:42.059404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:52.453 [2024-11-20 03:24:42.059415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:52.453 [2024-11-20 03:24:42.059477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.453 pt1 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.453 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.454 03:24:42 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.454 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.714 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.714 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.714 "name": "raid_bdev1", 00:17:52.714 "uuid": "b03adf20-9cec-405a-8b16-97aeaddf4867", 00:17:52.714 "strip_size_kb": 0, 00:17:52.714 "state": "online", 00:17:52.714 "raid_level": "raid1", 00:17:52.714 "superblock": true, 00:17:52.714 "num_base_bdevs": 2, 00:17:52.714 "num_base_bdevs_discovered": 1, 00:17:52.714 "num_base_bdevs_operational": 1, 00:17:52.714 "base_bdevs_list": [ 00:17:52.714 { 00:17:52.714 "name": null, 00:17:52.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.714 "is_configured": false, 00:17:52.714 "data_offset": 256, 00:17:52.714 "data_size": 7936 00:17:52.714 }, 00:17:52.714 { 00:17:52.714 "name": "pt2", 00:17:52.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.714 "is_configured": true, 00:17:52.714 "data_offset": 256, 00:17:52.714 "data_size": 7936 00:17:52.714 } 00:17:52.714 ] 00:17:52.714 }' 00:17:52.714 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.714 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:52.973 [2024-11-20 03:24:42.572289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' b03adf20-9cec-405a-8b16-97aeaddf4867 '!=' b03adf20-9cec-405a-8b16-97aeaddf4867 ']' 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88518 00:17:52.973 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88518 ']' 00:17:52.973 03:24:42 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88518 00:17:53.233 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:53.233 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.233 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88518 00:17:53.234 killing process with pid 88518 00:17:53.234 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.234 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.234 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88518' 00:17:53.234 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88518 00:17:53.234 [2024-11-20 03:24:42.633050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.234 [2024-11-20 03:24:42.633117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.234 [2024-11-20 03:24:42.633154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.234 [2024-11-20 03:24:42.633166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:53.234 03:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88518 00:17:53.234 [2024-11-20 03:24:42.826634] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.617 03:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:54.617 00:17:54.617 real 0m5.843s 00:17:54.617 user 0m8.825s 00:17:54.617 sys 0m1.094s 00:17:54.617 
03:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.617 ************************************ 00:17:54.617 END TEST raid_superblock_test_md_interleaved 00:17:54.617 ************************************ 00:17:54.617 03:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.617 03:24:43 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:54.617 03:24:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:54.617 03:24:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.617 03:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.617 ************************************ 00:17:54.617 START TEST raid_rebuild_test_sb_md_interleaved 00:17:54.617 ************************************ 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88841 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88841 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88841 ']' 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.617 03:24:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.617 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:54.617 Zero copy mechanism will not be used. 00:17:54.617 [2024-11-20 03:24:44.056143] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:17:54.617 [2024-11-20 03:24:44.056299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88841 ] 00:17:54.618 [2024-11-20 03:24:44.236206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.877 [2024-11-20 03:24:44.338642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.137 [2024-11-20 03:24:44.524266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.137 [2024-11-20 03:24:44.524295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.398 BaseBdev1_malloc 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.398 03:24:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.398 [2024-11-20 03:24:44.898649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:55.398 [2024-11-20 03:24:44.898709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.398 [2024-11-20 03:24:44.898730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:55.398 [2024-11-20 03:24:44.898744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.398 [2024-11-20 03:24:44.900507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.398 [2024-11-20 03:24:44.900545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:55.398 BaseBdev1 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.398 BaseBdev2_malloc 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.398 [2024-11-20 03:24:44.951712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:55.398 [2024-11-20 03:24:44.951771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.398 [2024-11-20 03:24:44.951790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:55.398 [2024-11-20 03:24:44.951803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.398 [2024-11-20 03:24:44.953552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.398 [2024-11-20 03:24:44.953586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:55.398 BaseBdev2 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.398 03:24:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.398 spare_malloc 00:17:55.398 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.398 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:55.398 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.398 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.658 spare_delay 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.658 [2024-11-20 03:24:45.045941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:55.658 [2024-11-20 03:24:45.045995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.658 [2024-11-20 03:24:45.046015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:55.658 [2024-11-20 03:24:45.046025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.658 [2024-11-20 03:24:45.047876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.658 [2024-11-20 03:24:45.047912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:55.658 spare 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.658 [2024-11-20 03:24:45.057959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.658 [2024-11-20 03:24:45.059751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.658 [2024-11-20 
03:24:45.059931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:55.658 [2024-11-20 03:24:45.059977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:55.658 [2024-11-20 03:24:45.060066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:55.658 [2024-11-20 03:24:45.060155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:55.658 [2024-11-20 03:24:45.060168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:55.658 [2024-11-20 03:24:45.060231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.658 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.659 "name": "raid_bdev1", 00:17:55.659 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:17:55.659 "strip_size_kb": 0, 00:17:55.659 "state": "online", 00:17:55.659 "raid_level": "raid1", 00:17:55.659 "superblock": true, 00:17:55.659 "num_base_bdevs": 2, 00:17:55.659 "num_base_bdevs_discovered": 2, 00:17:55.659 "num_base_bdevs_operational": 2, 00:17:55.659 "base_bdevs_list": [ 00:17:55.659 { 00:17:55.659 "name": "BaseBdev1", 00:17:55.659 "uuid": "a5045ad5-bcbf-5bd8-bfa9-d39f8ae17ae9", 00:17:55.659 "is_configured": true, 00:17:55.659 "data_offset": 256, 00:17:55.659 "data_size": 7936 00:17:55.659 }, 00:17:55.659 { 00:17:55.659 "name": "BaseBdev2", 00:17:55.659 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:17:55.659 "is_configured": true, 00:17:55.659 "data_offset": 256, 00:17:55.659 "data_size": 7936 00:17:55.659 } 00:17:55.659 ] 00:17:55.659 }' 00:17:55.659 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.659 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.920 03:24:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.920 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.920 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:55.920 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.920 [2024-11-20 03:24:45.541372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:56.180 03:24:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.180 [2024-11-20 03:24:45.640941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.180 03:24:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.180 "name": "raid_bdev1", 00:17:56.180 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:17:56.180 "strip_size_kb": 0, 00:17:56.180 "state": "online", 00:17:56.180 "raid_level": "raid1", 00:17:56.180 "superblock": true, 00:17:56.180 "num_base_bdevs": 2, 00:17:56.180 "num_base_bdevs_discovered": 1, 00:17:56.180 "num_base_bdevs_operational": 1, 00:17:56.180 "base_bdevs_list": [ 00:17:56.180 { 00:17:56.180 "name": null, 00:17:56.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.180 "is_configured": false, 00:17:56.180 "data_offset": 0, 00:17:56.180 "data_size": 7936 00:17:56.180 }, 00:17:56.180 { 00:17:56.180 "name": "BaseBdev2", 00:17:56.180 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:17:56.180 "is_configured": true, 00:17:56.180 "data_offset": 256, 00:17:56.180 "data_size": 7936 00:17:56.180 } 00:17:56.180 ] 00:17:56.180 }' 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.180 03:24:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.748 03:24:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.748 03:24:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.748 03:24:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.748 [2024-11-20 03:24:46.092151] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.748 [2024-11-20 03:24:46.109299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:56.748 03:24:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.748 03:24:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:56.748 [2024-11-20 03:24:46.111105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.687 "name": "raid_bdev1", 00:17:57.687 
"uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:17:57.687 "strip_size_kb": 0, 00:17:57.687 "state": "online", 00:17:57.687 "raid_level": "raid1", 00:17:57.687 "superblock": true, 00:17:57.687 "num_base_bdevs": 2, 00:17:57.687 "num_base_bdevs_discovered": 2, 00:17:57.687 "num_base_bdevs_operational": 2, 00:17:57.687 "process": { 00:17:57.687 "type": "rebuild", 00:17:57.687 "target": "spare", 00:17:57.687 "progress": { 00:17:57.687 "blocks": 2560, 00:17:57.687 "percent": 32 00:17:57.687 } 00:17:57.687 }, 00:17:57.687 "base_bdevs_list": [ 00:17:57.687 { 00:17:57.687 "name": "spare", 00:17:57.687 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:17:57.687 "is_configured": true, 00:17:57.687 "data_offset": 256, 00:17:57.687 "data_size": 7936 00:17:57.687 }, 00:17:57.687 { 00:17:57.687 "name": "BaseBdev2", 00:17:57.687 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:17:57.687 "is_configured": true, 00:17:57.687 "data_offset": 256, 00:17:57.687 "data_size": 7936 00:17:57.687 } 00:17:57.687 ] 00:17:57.687 }' 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.687 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.687 [2024-11-20 03:24:47.274826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:57.687 [2024-11-20 03:24:47.315791] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:57.687 [2024-11-20 03:24:47.315847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.687 [2024-11-20 03:24:47.315862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.687 [2024-11-20 03:24:47.315873] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.947 "name": "raid_bdev1", 00:17:57.947 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:17:57.947 "strip_size_kb": 0, 00:17:57.947 "state": "online", 00:17:57.947 "raid_level": "raid1", 00:17:57.947 "superblock": true, 00:17:57.947 "num_base_bdevs": 2, 00:17:57.947 "num_base_bdevs_discovered": 1, 00:17:57.947 "num_base_bdevs_operational": 1, 00:17:57.947 "base_bdevs_list": [ 00:17:57.947 { 00:17:57.947 "name": null, 00:17:57.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.947 "is_configured": false, 00:17:57.947 "data_offset": 0, 00:17:57.947 "data_size": 7936 00:17:57.947 }, 00:17:57.947 { 00:17:57.947 "name": "BaseBdev2", 00:17:57.947 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:17:57.947 "is_configured": true, 00:17:57.947 "data_offset": 256, 00:17:57.947 "data_size": 7936 00:17:57.947 } 00:17:57.947 ] 00:17:57.947 }' 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.947 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.207 "name": "raid_bdev1", 00:17:58.207 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:17:58.207 "strip_size_kb": 0, 00:17:58.207 "state": "online", 00:17:58.207 "raid_level": "raid1", 00:17:58.207 "superblock": true, 00:17:58.207 "num_base_bdevs": 2, 00:17:58.207 "num_base_bdevs_discovered": 1, 00:17:58.207 "num_base_bdevs_operational": 1, 00:17:58.207 "base_bdevs_list": [ 00:17:58.207 { 00:17:58.207 "name": null, 00:17:58.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.207 "is_configured": false, 00:17:58.207 "data_offset": 0, 00:17:58.207 "data_size": 7936 00:17:58.207 }, 00:17:58.207 { 00:17:58.207 "name": "BaseBdev2", 00:17:58.207 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:17:58.207 "is_configured": true, 00:17:58.207 "data_offset": 256, 00:17:58.207 "data_size": 7936 00:17:58.207 } 00:17:58.207 ] 00:17:58.207 }' 
00:17:58.207 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.467 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.467 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.467 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.467 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.467 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.467 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.467 [2024-11-20 03:24:47.916795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.467 [2024-11-20 03:24:47.932418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:58.467 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.467 03:24:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:58.467 [2024-11-20 03:24:47.934428] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.406 03:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.406 "name": "raid_bdev1", 00:17:59.406 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:17:59.406 "strip_size_kb": 0, 00:17:59.406 "state": "online", 00:17:59.406 "raid_level": "raid1", 00:17:59.406 "superblock": true, 00:17:59.406 "num_base_bdevs": 2, 00:17:59.406 "num_base_bdevs_discovered": 2, 00:17:59.406 "num_base_bdevs_operational": 2, 00:17:59.406 "process": { 00:17:59.406 "type": "rebuild", 00:17:59.406 "target": "spare", 00:17:59.406 "progress": { 00:17:59.406 "blocks": 2560, 00:17:59.406 "percent": 32 00:17:59.406 } 00:17:59.406 }, 00:17:59.406 "base_bdevs_list": [ 00:17:59.406 { 00:17:59.406 "name": "spare", 00:17:59.406 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:17:59.406 "is_configured": true, 00:17:59.406 "data_offset": 256, 00:17:59.406 "data_size": 7936 00:17:59.406 }, 00:17:59.406 { 00:17:59.406 "name": "BaseBdev2", 00:17:59.406 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:17:59.406 "is_configured": true, 00:17:59.406 "data_offset": 256, 00:17:59.406 "data_size": 7936 00:17:59.406 } 00:17:59.406 ] 00:17:59.406 }' 00:17:59.406 03:24:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.406 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.665 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:59.666 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=733 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.666 03:24:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.666 "name": "raid_bdev1", 00:17:59.666 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:17:59.666 "strip_size_kb": 0, 00:17:59.666 "state": "online", 00:17:59.666 "raid_level": "raid1", 00:17:59.666 "superblock": true, 00:17:59.666 "num_base_bdevs": 2, 00:17:59.666 "num_base_bdevs_discovered": 2, 00:17:59.666 "num_base_bdevs_operational": 2, 00:17:59.666 "process": { 00:17:59.666 "type": "rebuild", 00:17:59.666 "target": "spare", 00:17:59.666 "progress": { 00:17:59.666 "blocks": 2816, 00:17:59.666 "percent": 35 00:17:59.666 } 00:17:59.666 }, 00:17:59.666 "base_bdevs_list": [ 00:17:59.666 { 00:17:59.666 "name": "spare", 00:17:59.666 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:17:59.666 "is_configured": true, 00:17:59.666 "data_offset": 256, 00:17:59.666 "data_size": 7936 00:17:59.666 }, 00:17:59.666 { 00:17:59.666 "name": "BaseBdev2", 00:17:59.666 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:17:59.666 "is_configured": true, 00:17:59.666 "data_offset": 256, 00:17:59.666 "data_size": 7936 00:17:59.666 } 00:17:59.666 ] 00:17:59.666 }' 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.666 03:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:00.604 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.604 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.604 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.604 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.604 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.604 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.604 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.863 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.863 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.863 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.863 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.863 03:24:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.864 "name": "raid_bdev1", 00:18:00.864 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:00.864 "strip_size_kb": 0, 00:18:00.864 "state": "online", 00:18:00.864 "raid_level": "raid1", 00:18:00.864 "superblock": true, 00:18:00.864 "num_base_bdevs": 2, 00:18:00.864 "num_base_bdevs_discovered": 2, 00:18:00.864 "num_base_bdevs_operational": 2, 00:18:00.864 "process": { 00:18:00.864 "type": "rebuild", 00:18:00.864 "target": "spare", 00:18:00.864 "progress": { 00:18:00.864 "blocks": 5888, 00:18:00.864 "percent": 74 00:18:00.864 } 00:18:00.864 }, 00:18:00.864 "base_bdevs_list": [ 00:18:00.864 { 00:18:00.864 "name": "spare", 00:18:00.864 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:18:00.864 "is_configured": true, 00:18:00.864 "data_offset": 256, 00:18:00.864 "data_size": 7936 00:18:00.864 }, 00:18:00.864 { 00:18:00.864 "name": "BaseBdev2", 00:18:00.864 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:00.864 "is_configured": true, 00:18:00.864 "data_offset": 256, 00:18:00.864 "data_size": 7936 00:18:00.864 } 00:18:00.864 ] 00:18:00.864 }' 00:18:00.864 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.864 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.864 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.864 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.864 03:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.433 [2024-11-20 03:24:51.046394] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:01.433 [2024-11-20 03:24:51.046477] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:01.433 [2024-11-20 03:24:51.046567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.003 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.003 "name": "raid_bdev1", 00:18:02.003 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:02.003 "strip_size_kb": 0, 00:18:02.003 "state": "online", 00:18:02.003 "raid_level": "raid1", 00:18:02.004 "superblock": true, 00:18:02.004 "num_base_bdevs": 2, 00:18:02.004 
"num_base_bdevs_discovered": 2, 00:18:02.004 "num_base_bdevs_operational": 2, 00:18:02.004 "base_bdevs_list": [ 00:18:02.004 { 00:18:02.004 "name": "spare", 00:18:02.004 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:18:02.004 "is_configured": true, 00:18:02.004 "data_offset": 256, 00:18:02.004 "data_size": 7936 00:18:02.004 }, 00:18:02.004 { 00:18:02.004 "name": "BaseBdev2", 00:18:02.004 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:02.004 "is_configured": true, 00:18:02.004 "data_offset": 256, 00:18:02.004 "data_size": 7936 00:18:02.004 } 00:18:02.004 ] 00:18:02.004 }' 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.004 03:24:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.004 "name": "raid_bdev1", 00:18:02.004 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:02.004 "strip_size_kb": 0, 00:18:02.004 "state": "online", 00:18:02.004 "raid_level": "raid1", 00:18:02.004 "superblock": true, 00:18:02.004 "num_base_bdevs": 2, 00:18:02.004 "num_base_bdevs_discovered": 2, 00:18:02.004 "num_base_bdevs_operational": 2, 00:18:02.004 "base_bdevs_list": [ 00:18:02.004 { 00:18:02.004 "name": "spare", 00:18:02.004 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:18:02.004 "is_configured": true, 00:18:02.004 "data_offset": 256, 00:18:02.004 "data_size": 7936 00:18:02.004 }, 00:18:02.004 { 00:18:02.004 "name": "BaseBdev2", 00:18:02.004 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:02.004 "is_configured": true, 00:18:02.004 "data_offset": 256, 00:18:02.004 "data_size": 7936 00:18:02.004 } 00:18:02.004 ] 00:18:02.004 }' 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.004 03:24:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.004 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.264 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.264 "name": 
"raid_bdev1", 00:18:02.264 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:02.264 "strip_size_kb": 0, 00:18:02.264 "state": "online", 00:18:02.264 "raid_level": "raid1", 00:18:02.264 "superblock": true, 00:18:02.264 "num_base_bdevs": 2, 00:18:02.264 "num_base_bdevs_discovered": 2, 00:18:02.264 "num_base_bdevs_operational": 2, 00:18:02.264 "base_bdevs_list": [ 00:18:02.264 { 00:18:02.264 "name": "spare", 00:18:02.264 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:18:02.264 "is_configured": true, 00:18:02.264 "data_offset": 256, 00:18:02.264 "data_size": 7936 00:18:02.264 }, 00:18:02.264 { 00:18:02.264 "name": "BaseBdev2", 00:18:02.264 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:02.264 "is_configured": true, 00:18:02.264 "data_offset": 256, 00:18:02.264 "data_size": 7936 00:18:02.264 } 00:18:02.264 ] 00:18:02.264 }' 00:18:02.264 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.264 03:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 [2024-11-20 03:24:52.022978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.525 [2024-11-20 03:24:52.023012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.525 [2024-11-20 03:24:52.023102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.525 [2024-11-20 03:24:52.023166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.525 [2024-11-20 
03:24:52.023179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.525 03:24:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 [2024-11-20 03:24:52.094849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:02.525 [2024-11-20 03:24:52.094901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.525 [2024-11-20 03:24:52.094922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:02.525 [2024-11-20 03:24:52.094932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.525 [2024-11-20 03:24:52.096818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.525 [2024-11-20 03:24:52.096851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:02.525 [2024-11-20 03:24:52.096899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:02.525 [2024-11-20 03:24:52.096956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:02.525 [2024-11-20 03:24:52.097070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.525 spare 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.525 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.785 [2024-11-20 03:24:52.196962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:02.785 [2024-11-20 03:24:52.196991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:02.785 [2024-11-20 03:24:52.197072] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:02.785 [2024-11-20 03:24:52.197153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:02.785 [2024-11-20 03:24:52.197162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:02.785 [2024-11-20 03:24:52.197229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.785 03:24:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.785 "name": "raid_bdev1", 00:18:02.785 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:02.785 "strip_size_kb": 0, 00:18:02.785 "state": "online", 00:18:02.785 "raid_level": "raid1", 00:18:02.785 "superblock": true, 00:18:02.785 "num_base_bdevs": 2, 00:18:02.785 "num_base_bdevs_discovered": 2, 00:18:02.785 "num_base_bdevs_operational": 2, 00:18:02.785 "base_bdevs_list": [ 00:18:02.785 { 00:18:02.785 "name": "spare", 00:18:02.785 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:18:02.785 "is_configured": true, 00:18:02.785 "data_offset": 256, 00:18:02.785 "data_size": 7936 00:18:02.785 }, 00:18:02.785 { 00:18:02.785 "name": "BaseBdev2", 00:18:02.785 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:02.785 "is_configured": true, 00:18:02.785 "data_offset": 256, 00:18:02.785 "data_size": 7936 00:18:02.785 } 00:18:02.785 ] 00:18:02.785 }' 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.785 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.045 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.045 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.045 03:24:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.045 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.045 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.045 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.045 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.045 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.045 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.045 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.305 "name": "raid_bdev1", 00:18:03.305 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:03.305 "strip_size_kb": 0, 00:18:03.305 "state": "online", 00:18:03.305 "raid_level": "raid1", 00:18:03.305 "superblock": true, 00:18:03.305 "num_base_bdevs": 2, 00:18:03.305 "num_base_bdevs_discovered": 2, 00:18:03.305 "num_base_bdevs_operational": 2, 00:18:03.305 "base_bdevs_list": [ 00:18:03.305 { 00:18:03.305 "name": "spare", 00:18:03.305 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:18:03.305 "is_configured": true, 00:18:03.305 "data_offset": 256, 00:18:03.305 "data_size": 7936 00:18:03.305 }, 00:18:03.305 { 00:18:03.305 "name": "BaseBdev2", 00:18:03.305 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:03.305 "is_configured": true, 00:18:03.305 "data_offset": 256, 00:18:03.305 "data_size": 7936 00:18:03.305 } 00:18:03.305 ] 00:18:03.305 }' 00:18:03.305 03:24:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.305 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.306 [2024-11-20 03:24:52.809714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.306 03:24:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.306 "name": "raid_bdev1", 00:18:03.306 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:03.306 "strip_size_kb": 0, 00:18:03.306 "state": "online", 00:18:03.306 
"raid_level": "raid1", 00:18:03.306 "superblock": true, 00:18:03.306 "num_base_bdevs": 2, 00:18:03.306 "num_base_bdevs_discovered": 1, 00:18:03.306 "num_base_bdevs_operational": 1, 00:18:03.306 "base_bdevs_list": [ 00:18:03.306 { 00:18:03.306 "name": null, 00:18:03.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.306 "is_configured": false, 00:18:03.306 "data_offset": 0, 00:18:03.306 "data_size": 7936 00:18:03.306 }, 00:18:03.306 { 00:18:03.306 "name": "BaseBdev2", 00:18:03.306 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:03.306 "is_configured": true, 00:18:03.306 "data_offset": 256, 00:18:03.306 "data_size": 7936 00:18:03.306 } 00:18:03.306 ] 00:18:03.306 }' 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.306 03:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.876 03:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.876 03:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.876 03:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.876 [2024-11-20 03:24:53.268920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.876 [2024-11-20 03:24:53.269053] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.876 [2024-11-20 03:24:53.269068] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:03.876 [2024-11-20 03:24:53.269101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.876 [2024-11-20 03:24:53.284789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:03.876 03:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.876 03:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:03.876 [2024-11-20 03:24:53.286589] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:04.815 "name": "raid_bdev1", 00:18:04.815 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:04.815 "strip_size_kb": 0, 00:18:04.815 "state": "online", 00:18:04.815 "raid_level": "raid1", 00:18:04.815 "superblock": true, 00:18:04.815 "num_base_bdevs": 2, 00:18:04.815 "num_base_bdevs_discovered": 2, 00:18:04.815 "num_base_bdevs_operational": 2, 00:18:04.815 "process": { 00:18:04.815 "type": "rebuild", 00:18:04.815 "target": "spare", 00:18:04.815 "progress": { 00:18:04.815 "blocks": 2560, 00:18:04.815 "percent": 32 00:18:04.815 } 00:18:04.815 }, 00:18:04.815 "base_bdevs_list": [ 00:18:04.815 { 00:18:04.815 "name": "spare", 00:18:04.815 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:18:04.815 "is_configured": true, 00:18:04.815 "data_offset": 256, 00:18:04.815 "data_size": 7936 00:18:04.815 }, 00:18:04.815 { 00:18:04.815 "name": "BaseBdev2", 00:18:04.815 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:04.815 "is_configured": true, 00:18:04.815 "data_offset": 256, 00:18:04.815 "data_size": 7936 00:18:04.815 } 00:18:04.815 ] 00:18:04.815 }' 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.815 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.815 [2024-11-20 03:24:54.434518] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.075 [2024-11-20 03:24:54.491244] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.075 [2024-11-20 03:24:54.491353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.075 [2024-11-20 03:24:54.491387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.075 [2024-11-20 03:24:54.491409] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.075 03:24:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.075 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.075 "name": "raid_bdev1", 00:18:05.075 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:05.075 "strip_size_kb": 0, 00:18:05.075 "state": "online", 00:18:05.075 "raid_level": "raid1", 00:18:05.075 "superblock": true, 00:18:05.075 "num_base_bdevs": 2, 00:18:05.075 "num_base_bdevs_discovered": 1, 00:18:05.075 "num_base_bdevs_operational": 1, 00:18:05.075 "base_bdevs_list": [ 00:18:05.075 { 00:18:05.075 "name": null, 00:18:05.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.075 "is_configured": false, 00:18:05.075 "data_offset": 0, 00:18:05.075 "data_size": 7936 00:18:05.075 }, 00:18:05.075 { 00:18:05.075 "name": "BaseBdev2", 00:18:05.075 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:05.075 "is_configured": true, 00:18:05.075 "data_offset": 256, 00:18:05.075 "data_size": 7936 00:18:05.075 } 00:18:05.075 ] 00:18:05.075 }' 00:18:05.076 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.076 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.336 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:05.336 03:24:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.336 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.336 [2024-11-20 03:24:54.947471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.336 [2024-11-20 03:24:54.947527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.336 [2024-11-20 03:24:54.947552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:05.336 [2024-11-20 03:24:54.947564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.336 [2024-11-20 03:24:54.947780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.336 [2024-11-20 03:24:54.947799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.336 [2024-11-20 03:24:54.947845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:05.336 [2024-11-20 03:24:54.947858] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:05.336 [2024-11-20 03:24:54.947866] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:05.336 [2024-11-20 03:24:54.947892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.336 [2024-11-20 03:24:54.962880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:05.336 spare 00:18:05.336 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.336 03:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:05.336 [2024-11-20 03:24:54.964737] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.791 03:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.791 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:06.791 "name": "raid_bdev1", 00:18:06.791 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:06.791 "strip_size_kb": 0, 00:18:06.791 "state": "online", 00:18:06.791 "raid_level": "raid1", 00:18:06.791 "superblock": true, 00:18:06.791 "num_base_bdevs": 2, 00:18:06.791 "num_base_bdevs_discovered": 2, 00:18:06.791 "num_base_bdevs_operational": 2, 00:18:06.791 "process": { 00:18:06.791 "type": "rebuild", 00:18:06.791 "target": "spare", 00:18:06.791 "progress": { 00:18:06.791 "blocks": 2560, 00:18:06.791 "percent": 32 00:18:06.791 } 00:18:06.791 }, 00:18:06.791 "base_bdevs_list": [ 00:18:06.791 { 00:18:06.791 "name": "spare", 00:18:06.791 "uuid": "64127836-3212-55cf-885a-92cd0c7e6a43", 00:18:06.791 "is_configured": true, 00:18:06.791 "data_offset": 256, 00:18:06.791 "data_size": 7936 00:18:06.791 }, 00:18:06.791 { 00:18:06.791 "name": "BaseBdev2", 00:18:06.791 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:06.791 "is_configured": true, 00:18:06.791 "data_offset": 256, 00:18:06.791 "data_size": 7936 00:18:06.791 } 00:18:06.791 ] 00:18:06.791 }' 00:18:06.791 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.791 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.791 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.791 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.791 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:06.791 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.791 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.791 [2024-11-20 
03:24:56.128496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.791 [2024-11-20 03:24:56.169459] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.791 [2024-11-20 03:24:56.169518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.792 [2024-11-20 03:24:56.169537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.792 [2024-11-20 03:24:56.169545] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.792 03:24:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.792 "name": "raid_bdev1", 00:18:06.792 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:06.792 "strip_size_kb": 0, 00:18:06.792 "state": "online", 00:18:06.792 "raid_level": "raid1", 00:18:06.792 "superblock": true, 00:18:06.792 "num_base_bdevs": 2, 00:18:06.792 "num_base_bdevs_discovered": 1, 00:18:06.792 "num_base_bdevs_operational": 1, 00:18:06.792 "base_bdevs_list": [ 00:18:06.792 { 00:18:06.792 "name": null, 00:18:06.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.792 "is_configured": false, 00:18:06.792 "data_offset": 0, 00:18:06.792 "data_size": 7936 00:18:06.792 }, 00:18:06.792 { 00:18:06.792 "name": "BaseBdev2", 00:18:06.792 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:06.792 "is_configured": true, 00:18:06.792 "data_offset": 256, 00:18:06.792 "data_size": 7936 00:18:06.792 } 00:18:06.792 ] 00:18:06.792 }' 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.792 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.052 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.052 03:24:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.052 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.052 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.052 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.052 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.052 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.052 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.052 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.052 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.312 "name": "raid_bdev1", 00:18:07.312 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:07.312 "strip_size_kb": 0, 00:18:07.312 "state": "online", 00:18:07.312 "raid_level": "raid1", 00:18:07.312 "superblock": true, 00:18:07.312 "num_base_bdevs": 2, 00:18:07.312 "num_base_bdevs_discovered": 1, 00:18:07.312 "num_base_bdevs_operational": 1, 00:18:07.312 "base_bdevs_list": [ 00:18:07.312 { 00:18:07.312 "name": null, 00:18:07.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.312 "is_configured": false, 00:18:07.312 "data_offset": 0, 00:18:07.312 "data_size": 7936 00:18:07.312 }, 00:18:07.312 { 00:18:07.312 "name": "BaseBdev2", 00:18:07.312 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:07.312 "is_configured": true, 00:18:07.312 "data_offset": 256, 
00:18:07.312 "data_size": 7936 00:18:07.312 } 00:18:07.312 ] 00:18:07.312 }' 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.312 [2024-11-20 03:24:56.806215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:07.312 [2024-11-20 03:24:56.806327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.312 [2024-11-20 03:24:56.806353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:07.312 [2024-11-20 03:24:56.806363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.312 [2024-11-20 03:24:56.806535] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.312 [2024-11-20 03:24:56.806547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:07.312 [2024-11-20 03:24:56.806595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:07.312 [2024-11-20 03:24:56.806607] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.312 [2024-11-20 03:24:56.806615] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:07.312 [2024-11-20 03:24:56.806635] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:07.312 BaseBdev1 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.312 03:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.252 03:24:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.252 "name": "raid_bdev1", 00:18:08.252 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:08.252 "strip_size_kb": 0, 00:18:08.252 "state": "online", 00:18:08.252 "raid_level": "raid1", 00:18:08.252 "superblock": true, 00:18:08.252 "num_base_bdevs": 2, 00:18:08.252 "num_base_bdevs_discovered": 1, 00:18:08.252 "num_base_bdevs_operational": 1, 00:18:08.252 "base_bdevs_list": [ 00:18:08.252 { 00:18:08.252 "name": null, 00:18:08.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.252 "is_configured": false, 00:18:08.252 "data_offset": 0, 00:18:08.252 "data_size": 7936 00:18:08.252 }, 00:18:08.252 { 00:18:08.252 "name": "BaseBdev2", 00:18:08.252 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:08.252 "is_configured": true, 00:18:08.252 "data_offset": 256, 00:18:08.252 "data_size": 7936 00:18:08.252 } 00:18:08.252 ] 00:18:08.252 }' 00:18:08.252 03:24:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.252 03:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.822 "name": "raid_bdev1", 00:18:08.822 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:08.822 "strip_size_kb": 0, 00:18:08.822 "state": "online", 00:18:08.822 "raid_level": "raid1", 00:18:08.822 "superblock": true, 00:18:08.822 "num_base_bdevs": 2, 00:18:08.822 "num_base_bdevs_discovered": 1, 00:18:08.822 "num_base_bdevs_operational": 1, 00:18:08.822 "base_bdevs_list": [ 00:18:08.822 { 00:18:08.822 "name": 
null, 00:18:08.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.822 "is_configured": false, 00:18:08.822 "data_offset": 0, 00:18:08.822 "data_size": 7936 00:18:08.822 }, 00:18:08.822 { 00:18:08.822 "name": "BaseBdev2", 00:18:08.822 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:08.822 "is_configured": true, 00:18:08.822 "data_offset": 256, 00:18:08.822 "data_size": 7936 00:18:08.822 } 00:18:08.822 ] 00:18:08.822 }' 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.822 [2024-11-20 03:24:58.395484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.822 [2024-11-20 03:24:58.395635] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.822 [2024-11-20 03:24:58.395652] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.822 request: 00:18:08.822 { 00:18:08.822 "base_bdev": "BaseBdev1", 00:18:08.822 "raid_bdev": "raid_bdev1", 00:18:08.822 "method": "bdev_raid_add_base_bdev", 00:18:08.822 "req_id": 1 00:18:08.822 } 00:18:08.822 Got JSON-RPC error response 00:18:08.822 response: 00:18:08.822 { 00:18:08.822 "code": -22, 00:18:08.822 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:08.822 } 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.822 03:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.779 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.038 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.038 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.038 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.038 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.038 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.038 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.038 "name": "raid_bdev1", 00:18:10.038 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:10.038 "strip_size_kb": 0, 
00:18:10.038 "state": "online", 00:18:10.038 "raid_level": "raid1", 00:18:10.038 "superblock": true, 00:18:10.038 "num_base_bdevs": 2, 00:18:10.039 "num_base_bdevs_discovered": 1, 00:18:10.039 "num_base_bdevs_operational": 1, 00:18:10.039 "base_bdevs_list": [ 00:18:10.039 { 00:18:10.039 "name": null, 00:18:10.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.039 "is_configured": false, 00:18:10.039 "data_offset": 0, 00:18:10.039 "data_size": 7936 00:18:10.039 }, 00:18:10.039 { 00:18:10.039 "name": "BaseBdev2", 00:18:10.039 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:10.039 "is_configured": true, 00:18:10.039 "data_offset": 256, 00:18:10.039 "data_size": 7936 00:18:10.039 } 00:18:10.039 ] 00:18:10.039 }' 00:18:10.039 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.039 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.298 
03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.298 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.298 "name": "raid_bdev1", 00:18:10.298 "uuid": "bf2e97ed-746e-4006-a336-385cc8719b5a", 00:18:10.298 "strip_size_kb": 0, 00:18:10.298 "state": "online", 00:18:10.298 "raid_level": "raid1", 00:18:10.298 "superblock": true, 00:18:10.298 "num_base_bdevs": 2, 00:18:10.298 "num_base_bdevs_discovered": 1, 00:18:10.298 "num_base_bdevs_operational": 1, 00:18:10.298 "base_bdevs_list": [ 00:18:10.298 { 00:18:10.298 "name": null, 00:18:10.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.298 "is_configured": false, 00:18:10.298 "data_offset": 0, 00:18:10.298 "data_size": 7936 00:18:10.298 }, 00:18:10.298 { 00:18:10.298 "name": "BaseBdev2", 00:18:10.298 "uuid": "bd66e69c-3cf4-5982-bdd1-1890aa75bc7f", 00:18:10.298 "is_configured": true, 00:18:10.298 "data_offset": 256, 00:18:10.298 "data_size": 7936 00:18:10.298 } 00:18:10.298 ] 00:18:10.298 }' 00:18:10.559 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.559 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.559 03:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88841 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88841 ']' 00:18:10.559 03:25:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88841 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88841 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88841' 00:18:10.559 killing process with pid 88841 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88841 00:18:10.559 Received shutdown signal, test time was about 60.000000 seconds 00:18:10.559 00:18:10.559 Latency(us) 00:18:10.559 [2024-11-20T03:25:00.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.559 [2024-11-20T03:25:00.194Z] =================================================================================================================== 00:18:10.559 [2024-11-20T03:25:00.194Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.559 [2024-11-20 03:25:00.074849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.559 [2024-11-20 03:25:00.074956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.559 [2024-11-20 03:25:00.074997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.559 [2024-11-20 03:25:00.075007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:10.559 03:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88841 00:18:10.819 [2024-11-20 03:25:00.354122] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.760 03:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:11.760 00:18:11.760 real 0m17.431s 00:18:11.760 user 0m22.863s 00:18:11.760 sys 0m1.745s 00:18:11.760 03:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.760 ************************************ 00:18:11.760 END TEST raid_rebuild_test_sb_md_interleaved 00:18:11.760 ************************************ 00:18:11.760 03:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.020 03:25:01 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:12.020 03:25:01 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:12.020 03:25:01 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88841 ']' 00:18:12.020 03:25:01 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88841 00:18:12.020 03:25:01 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:12.020 ************************************ 00:18:12.020 END TEST bdev_raid 00:18:12.020 ************************************ 00:18:12.020 00:18:12.020 real 11m54.595s 00:18:12.020 user 16m9.805s 00:18:12.020 sys 1m50.145s 00:18:12.020 03:25:01 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.020 03:25:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.020 03:25:01 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:12.020 03:25:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:12.020 03:25:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.020 03:25:01 -- common/autotest_common.sh@10 -- # set +x 00:18:12.020 
************************************ 00:18:12.020 START TEST spdkcli_raid 00:18:12.020 ************************************ 00:18:12.020 03:25:01 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:12.280 * Looking for test storage... 00:18:12.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.281 03:25:01 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:12.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.281 --rc genhtml_branch_coverage=1 00:18:12.281 --rc genhtml_function_coverage=1 00:18:12.281 --rc genhtml_legend=1 00:18:12.281 --rc geninfo_all_blocks=1 00:18:12.281 --rc geninfo_unexecuted_blocks=1 00:18:12.281 00:18:12.281 ' 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:12.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.281 --rc genhtml_branch_coverage=1 00:18:12.281 --rc genhtml_function_coverage=1 00:18:12.281 --rc genhtml_legend=1 00:18:12.281 --rc geninfo_all_blocks=1 00:18:12.281 --rc geninfo_unexecuted_blocks=1 00:18:12.281 00:18:12.281 ' 00:18:12.281 
03:25:01 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:12.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.281 --rc genhtml_branch_coverage=1 00:18:12.281 --rc genhtml_function_coverage=1 00:18:12.281 --rc genhtml_legend=1 00:18:12.281 --rc geninfo_all_blocks=1 00:18:12.281 --rc geninfo_unexecuted_blocks=1 00:18:12.281 00:18:12.281 ' 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:12.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.281 --rc genhtml_branch_coverage=1 00:18:12.281 --rc genhtml_function_coverage=1 00:18:12.281 --rc genhtml_legend=1 00:18:12.281 --rc geninfo_all_blocks=1 00:18:12.281 --rc geninfo_unexecuted_blocks=1 00:18:12.281 00:18:12.281 ' 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:12.281 03:25:01 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89517 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:12.281 03:25:01 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89517 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89517 ']' 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.281 03:25:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.281 [2024-11-20 03:25:01.908077] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:18:12.281 [2024-11-20 03:25:01.908290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89517 ] 00:18:12.542 [2024-11-20 03:25:02.082578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:12.802 [2024-11-20 03:25:02.197475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.802 [2024-11-20 03:25:02.197504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.743 03:25:03 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.743 03:25:03 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:13.743 03:25:03 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:13.743 03:25:03 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:13.743 03:25:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.743 03:25:03 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:13.743 03:25:03 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.743 03:25:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.743 03:25:03 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:13.743 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:13.743 ' 00:18:15.126 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:15.126 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:15.126 03:25:04 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:15.126 03:25:04 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.126 03:25:04 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.386 03:25:04 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:15.386 03:25:04 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.386 03:25:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.386 03:25:04 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:15.386 ' 00:18:16.327 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:16.327 03:25:05 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:16.327 03:25:05 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.327 03:25:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.587 03:25:06 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:16.587 03:25:06 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.587 03:25:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.587 03:25:06 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:16.587 03:25:06 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:17.157 03:25:06 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:17.157 03:25:06 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:17.157 03:25:06 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:17.157 03:25:06 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.157 03:25:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.157 03:25:06 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:17.157 03:25:06 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.157 03:25:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.157 03:25:06 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:17.157 ' 00:18:18.096 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:18.096 03:25:07 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:18.096 03:25:07 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.096 03:25:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.096 03:25:07 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:18.096 03:25:07 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.096 03:25:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.355 03:25:07 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:18.355 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:18.355 ' 00:18:19.735 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:19.735 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:19.735 03:25:09 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.735 03:25:09 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89517 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89517 ']' 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89517 00:18:19.735 03:25:09 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89517 00:18:19.735 killing process with pid 89517 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89517' 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89517 00:18:19.735 03:25:09 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89517 00:18:22.275 Process with pid 89517 is not found 00:18:22.275 03:25:11 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:22.275 03:25:11 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89517 ']' 00:18:22.275 03:25:11 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89517 00:18:22.275 03:25:11 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89517 ']' 00:18:22.275 03:25:11 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89517 00:18:22.275 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89517) - No such process 00:18:22.275 03:25:11 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89517 is not found' 00:18:22.275 03:25:11 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:22.275 03:25:11 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:22.275 03:25:11 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:22.275 03:25:11 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:22.275 ************************************ 00:18:22.275 END TEST spdkcli_raid 
00:18:22.275 ************************************ 00:18:22.275 00:18:22.275 real 0m9.999s 00:18:22.275 user 0m20.579s 00:18:22.275 sys 0m1.174s 00:18:22.275 03:25:11 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.275 03:25:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.275 03:25:11 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:22.275 03:25:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:22.275 03:25:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.275 03:25:11 -- common/autotest_common.sh@10 -- # set +x 00:18:22.275 ************************************ 00:18:22.275 START TEST blockdev_raid5f 00:18:22.275 ************************************ 00:18:22.275 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:22.275 * Looking for test storage... 00:18:22.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:22.275 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:22.275 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:22.275 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:22.275 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:22.275 03:25:11 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.276 03:25:11 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:22.276 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.276 --rc genhtml_branch_coverage=1 00:18:22.276 --rc genhtml_function_coverage=1 00:18:22.276 --rc genhtml_legend=1 00:18:22.276 --rc geninfo_all_blocks=1 00:18:22.276 --rc geninfo_unexecuted_blocks=1 00:18:22.276 00:18:22.276 ' 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.276 --rc genhtml_branch_coverage=1 00:18:22.276 --rc genhtml_function_coverage=1 00:18:22.276 --rc genhtml_legend=1 00:18:22.276 --rc geninfo_all_blocks=1 00:18:22.276 --rc geninfo_unexecuted_blocks=1 00:18:22.276 00:18:22.276 ' 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.276 --rc genhtml_branch_coverage=1 00:18:22.276 --rc genhtml_function_coverage=1 00:18:22.276 --rc genhtml_legend=1 00:18:22.276 --rc geninfo_all_blocks=1 00:18:22.276 --rc geninfo_unexecuted_blocks=1 00:18:22.276 00:18:22.276 ' 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.276 --rc genhtml_branch_coverage=1 00:18:22.276 --rc genhtml_function_coverage=1 00:18:22.276 --rc genhtml_legend=1 00:18:22.276 --rc geninfo_all_blocks=1 00:18:22.276 --rc geninfo_unexecuted_blocks=1 00:18:22.276 00:18:22.276 ' 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89798 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:22.276 03:25:11 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89798 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89798 ']' 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.276 03:25:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:22.536 [2024-11-20 03:25:11.975271] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:18:22.536 [2024-11-20 03:25:11.975434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89798 ] 00:18:22.536 [2024-11-20 03:25:12.147224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.797 [2024-11-20 03:25:12.252380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 Malloc0 00:18:23.737 Malloc1 00:18:23.737 Malloc2 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 03:25:13 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7e3c372a-1eec-4d45-92d1-f9729fa17010"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7e3c372a-1eec-4d45-92d1-f9729fa17010",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7e3c372a-1eec-4d45-92d1-f9729fa17010",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "131cc8aa-4e7a-4bdc-a6d6-bcaa64d089c0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "843b42e2-3eac-438b-ac60-d87071b9326f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f7389702-eb5b-45c3-bbb4-26e683bc9eb9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:23.737 03:25:13 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89798 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89798 ']' 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89798 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:23.737 03:25:13 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.737 
03:25:13 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89798 00:18:23.998 killing process with pid 89798 00:18:23.998 03:25:13 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.998 03:25:13 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.998 03:25:13 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89798' 00:18:23.998 03:25:13 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89798 00:18:23.998 03:25:13 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89798 00:18:26.538 03:25:15 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:26.538 03:25:15 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:26.538 03:25:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:26.538 03:25:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.538 03:25:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:26.538 ************************************ 00:18:26.538 START TEST bdev_hello_world 00:18:26.538 ************************************ 00:18:26.538 03:25:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:26.538 [2024-11-20 03:25:15.963433] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:18:26.538 [2024-11-20 03:25:15.963546] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89864 ] 00:18:26.538 [2024-11-20 03:25:16.136648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.798 [2024-11-20 03:25:16.241316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.389 [2024-11-20 03:25:16.748177] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:27.389 [2024-11-20 03:25:16.748228] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:27.389 [2024-11-20 03:25:16.748243] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:27.389 [2024-11-20 03:25:16.748711] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:27.389 [2024-11-20 03:25:16.748833] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:27.389 [2024-11-20 03:25:16.748848] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:27.389 [2024-11-20 03:25:16.748889] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:27.389 00:18:27.390 [2024-11-20 03:25:16.748906] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:28.771 00:18:28.771 real 0m2.147s 00:18:28.771 user 0m1.800s 00:18:28.771 sys 0m0.227s 00:18:28.771 03:25:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.771 ************************************ 00:18:28.771 END TEST bdev_hello_world 00:18:28.771 ************************************ 00:18:28.771 03:25:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:28.771 03:25:18 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:28.771 03:25:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:28.771 03:25:18 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.771 03:25:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:28.771 ************************************ 00:18:28.771 START TEST bdev_bounds 00:18:28.771 ************************************ 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89906 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89906' 00:18:28.771 Process bdevio pid: 89906 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89906 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89906 ']' 00:18:28.771 03:25:18 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.771 03:25:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:28.771 [2024-11-20 03:25:18.178240] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:28.771 [2024-11-20 03:25:18.178357] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89906 ] 00:18:28.771 [2024-11-20 03:25:18.351922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:29.031 [2024-11-20 03:25:18.460977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.031 [2024-11-20 03:25:18.461133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.031 [2024-11-20 03:25:18.461169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.601 03:25:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.601 03:25:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:29.601 03:25:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:29.601 I/O targets: 00:18:29.601 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:29.601 00:18:29.601 
00:18:29.601 CUnit - A unit testing framework for C - Version 2.1-3 00:18:29.601 http://cunit.sourceforge.net/ 00:18:29.601 00:18:29.601 00:18:29.601 Suite: bdevio tests on: raid5f 00:18:29.601 Test: blockdev write read block ...passed 00:18:29.601 Test: blockdev write zeroes read block ...passed 00:18:29.601 Test: blockdev write zeroes read no split ...passed 00:18:29.601 Test: blockdev write zeroes read split ...passed 00:18:29.860 Test: blockdev write zeroes read split partial ...passed 00:18:29.860 Test: blockdev reset ...passed 00:18:29.860 Test: blockdev write read 8 blocks ...passed 00:18:29.860 Test: blockdev write read size > 128k ...passed 00:18:29.860 Test: blockdev write read invalid size ...passed 00:18:29.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.860 Test: blockdev write read max offset ...passed 00:18:29.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.860 Test: blockdev writev readv 8 blocks ...passed 00:18:29.860 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.860 Test: blockdev writev readv block ...passed 00:18:29.860 Test: blockdev writev readv size > 128k ...passed 00:18:29.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.860 Test: blockdev comparev and writev ...passed 00:18:29.860 Test: blockdev nvme passthru rw ...passed 00:18:29.860 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.860 Test: blockdev nvme admin passthru ...passed 00:18:29.860 Test: blockdev copy ...passed 00:18:29.860 00:18:29.860 Run Summary: Type Total Ran Passed Failed Inactive 00:18:29.860 suites 1 1 n/a 0 0 00:18:29.860 tests 23 23 23 0 0 00:18:29.860 asserts 130 130 130 0 n/a 00:18:29.860 00:18:29.860 Elapsed time = 0.635 seconds 00:18:29.860 0 00:18:29.860 03:25:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89906 00:18:29.860 
03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89906 ']' 00:18:29.860 03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89906 00:18:29.860 03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:29.860 03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.860 03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89906 00:18:29.860 03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:29.861 03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:29.861 03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89906' 00:18:29.861 killing process with pid 89906 00:18:29.861 03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89906 00:18:29.861 03:25:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89906 00:18:31.242 03:25:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:31.242 ************************************ 00:18:31.242 END TEST bdev_bounds 00:18:31.242 ************************************ 00:18:31.242 00:18:31.242 real 0m2.643s 00:18:31.242 user 0m6.553s 00:18:31.242 sys 0m0.375s 00:18:31.242 03:25:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.242 03:25:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:31.242 03:25:20 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:31.242 03:25:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:31.242 03:25:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.242 
03:25:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:31.242 ************************************ 00:18:31.242 START TEST bdev_nbd 00:18:31.242 ************************************ 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89966 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89966 /var/tmp/spdk-nbd.sock 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89966 ']' 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:31.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.242 03:25:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:31.502 [2024-11-20 03:25:20.917166] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:18:31.502 [2024-11-20 03:25:20.917365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.502 [2024-11-20 03:25:21.098038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.762 [2024-11-20 03:25:21.209248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:32.333 03:25:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.594 03:25:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.594 1+0 records in 00:18:32.594 1+0 records out 00:18:32.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396527 s, 10.3 MB/s 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:32.594 { 00:18:32.594 "nbd_device": "/dev/nbd0", 00:18:32.594 "bdev_name": "raid5f" 00:18:32.594 } 00:18:32.594 ]' 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:32.594 { 00:18:32.594 "nbd_device": "/dev/nbd0", 00:18:32.594 "bdev_name": "raid5f" 00:18:32.594 } 00:18:32.594 ]' 00:18:32.594 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.854 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:33.114 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:33.115 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:33.115 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:33.115 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:33.115 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:33.115 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:33.374 /dev/nbd0 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:33.374 03:25:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:33.374 1+0 records in 00:18:33.374 1+0 records out 00:18:33.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483779 s, 8.5 MB/s 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:33.374 03:25:22 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:33.375 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.375 03:25:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:33.634 { 00:18:33.634 "nbd_device": "/dev/nbd0", 00:18:33.634 "bdev_name": "raid5f" 00:18:33.634 } 00:18:33.634 ]' 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:33.634 { 00:18:33.634 "nbd_device": "/dev/nbd0", 00:18:33.634 "bdev_name": "raid5f" 00:18:33.634 } 00:18:33.634 ]' 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:33.634 256+0 records in 00:18:33.634 256+0 records out 00:18:33.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142442 s, 73.6 MB/s 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:33.634 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:33.894 256+0 records in 00:18:33.894 256+0 records out 00:18:33.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028087 s, 37.3 MB/s 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:33.894 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:33.895 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:33.895 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:33.895 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:33.895 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:33.895 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:33.895 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:33.895 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:33.895 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:33.895 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.154 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:34.154 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:34.154 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:34.154 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:34.154 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:34.154 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:34.414 03:25:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:34.414 malloc_lvol_verify 00:18:34.414 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:34.674 505ace69-6f54-45e7-87a6-b009f48ff035 00:18:34.674 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:34.934 59b72fab-5ba4-4985-9851-451c37a4faf4 00:18:34.934 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:35.193 /dev/nbd0 00:18:35.193 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:35.193 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:35.194 mke2fs 1.47.0 (5-Feb-2023) 00:18:35.194 Discarding device blocks: 0/4096 done 00:18:35.194 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:35.194 00:18:35.194 Allocating group tables: 0/1 done 00:18:35.194 Writing inode tables: 0/1 done 00:18:35.194 Creating journal (1024 blocks): done 00:18:35.194 Writing superblocks and filesystem accounting information: 0/1 done 00:18:35.194 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:35.194 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89966 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89966 ']' 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89966 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89966 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.454 killing process with pid 89966 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89966' 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89966 00:18:35.454 03:25:24 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89966 00:18:36.837 03:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:36.837 00:18:36.837 real 0m5.473s 00:18:36.837 user 0m7.314s 00:18:36.837 sys 0m1.369s 00:18:36.837 03:25:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.837 03:25:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:36.837 ************************************ 00:18:36.837 END TEST bdev_nbd 00:18:36.837 ************************************ 00:18:36.837 03:25:26 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:36.837 03:25:26 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:36.837 03:25:26 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:36.837 03:25:26 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:36.837 03:25:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:36.837 03:25:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.837 03:25:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:36.837 ************************************ 00:18:36.837 START TEST bdev_fio 00:18:36.837 ************************************ 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:36.837 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:36.837 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:36.838 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:36.838 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:37.098 ************************************ 00:18:37.098 START TEST bdev_fio_rw_verify 00:18:37.098 ************************************ 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:37.098 03:25:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.358 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:37.358 fio-3.35 00:18:37.358 Starting 1 thread 00:18:49.579 00:18:49.579 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90165: Wed Nov 20 03:25:37 2024 00:18:49.579 read: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(478MiB/10001msec) 00:18:49.579 slat (usec): min=17, max=546, avg=19.46, stdev= 3.29 00:18:49.579 clat (usec): min=11, max=1204, avg=131.26, stdev=48.75 00:18:49.579 lat (usec): min=31, max=1223, avg=150.72, stdev=49.69 00:18:49.579 clat percentiles (usec): 00:18:49.579 | 50.000th=[ 137], 99.000th=[ 212], 99.900th=[ 371], 99.990th=[ 873], 00:18:49.579 | 99.999th=[ 1188] 00:18:49.579 write: IOPS=12.8k, BW=50.1MiB/s (52.5MB/s)(494MiB/9874msec); 0 zone resets 00:18:49.579 slat (usec): min=8, max=279, avg=16.70, stdev= 4.04 00:18:49.579 clat (usec): min=60, max=1013, avg=300.53, stdev=39.43 00:18:49.579 lat (usec): min=75, max=1206, avg=317.23, stdev=40.33 00:18:49.579 clat percentiles (usec): 00:18:49.579 | 50.000th=[ 306], 99.000th=[ 371], 99.900th=[ 562], 99.990th=[ 922], 00:18:49.579 | 99.999th=[ 971] 00:18:49.579 bw ( KiB/s): min=47728, max=54304, per=98.87%, avg=50687.58, stdev=1584.33, samples=19 00:18:49.579 iops : min=11932, max=13576, avg=12671.89, stdev=396.08, samples=19 00:18:49.579 lat (usec) : 20=0.01%, 50=0.01%, 100=16.50%, 
250=38.64%, 500=44.76% 00:18:49.579 lat (usec) : 750=0.06%, 1000=0.03% 00:18:49.579 lat (msec) : 2=0.01% 00:18:49.579 cpu : usr=98.72%, sys=0.46%, ctx=79, majf=0, minf=10026 00:18:49.579 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.579 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.579 issued rwts: total=122379,126549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.579 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:49.579 00:18:49.579 Run status group 0 (all jobs): 00:18:49.579 READ: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=478MiB (501MB), run=10001-10001msec 00:18:49.579 WRITE: bw=50.1MiB/s (52.5MB/s), 50.1MiB/s-50.1MiB/s (52.5MB/s-52.5MB/s), io=494MiB (518MB), run=9874-9874msec 00:18:49.579 ----------------------------------------------------- 00:18:49.579 Suppressions used: 00:18:49.579 count bytes template 00:18:49.579 1 7 /usr/src/fio/parse.c 00:18:49.579 381 36576 /usr/src/fio/iolog.c 00:18:49.579 1 8 libtcmalloc_minimal.so 00:18:49.579 1 904 libcrypto.so 00:18:49.579 ----------------------------------------------------- 00:18:49.579 00:18:49.838 00:18:49.838 real 0m12.703s 00:18:49.838 user 0m12.878s 00:18:49.838 sys 0m0.555s 00:18:49.838 03:25:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.838 03:25:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:49.838 ************************************ 00:18:49.838 END TEST bdev_fio_rw_verify 00:18:49.838 ************************************ 00:18:49.838 03:25:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7e3c372a-1eec-4d45-92d1-f9729fa17010"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7e3c372a-1eec-4d45-92d1-f9729fa17010",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7e3c372a-1eec-4d45-92d1-f9729fa17010",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "131cc8aa-4e7a-4bdc-a6d6-bcaa64d089c0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "843b42e2-3eac-438b-ac60-d87071b9326f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f7389702-eb5b-45c3-bbb4-26e683bc9eb9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:49.839 /home/vagrant/spdk_repo/spdk 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:18:49.839 00:18:49.839 real 0m13.005s 00:18:49.839 user 0m13.014s 00:18:49.839 sys 0m0.689s 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.839 03:25:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:49.839 ************************************ 00:18:49.839 END TEST bdev_fio 00:18:49.839 ************************************ 00:18:49.839 03:25:39 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:49.839 03:25:39 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:49.839 03:25:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:49.839 03:25:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.839 03:25:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:49.839 ************************************ 00:18:49.839 START TEST bdev_verify 00:18:49.839 ************************************ 00:18:49.839 03:25:39 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:50.098 [2024-11-20 03:25:39.530385] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:18:50.098 [2024-11-20 03:25:39.530491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90330 ] 00:18:50.098 [2024-11-20 03:25:39.704193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:50.358 [2024-11-20 03:25:39.811754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.358 [2024-11-20 03:25:39.811785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.928 Running I/O for 5 seconds... 00:18:52.806 11062.00 IOPS, 43.21 MiB/s [2024-11-20T03:25:43.381Z] 11072.00 IOPS, 43.25 MiB/s [2024-11-20T03:25:44.762Z] 11107.67 IOPS, 43.39 MiB/s [2024-11-20T03:25:45.332Z] 11073.00 IOPS, 43.25 MiB/s [2024-11-20T03:25:45.592Z] 11073.20 IOPS, 43.25 MiB/s 00:18:55.957 Latency(us) 00:18:55.957 [2024-11-20T03:25:45.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.957 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:55.957 Verification LBA range: start 0x0 length 0x2000 00:18:55.957 raid5f : 5.03 4409.17 17.22 0.00 0.00 43772.80 186.91 31365.70 00:18:55.957 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.957 Verification LBA range: start 0x2000 length 0x2000 00:18:55.957 raid5f : 5.03 6656.45 26.00 0.00 0.00 29010.36 123.42 20490.73 00:18:55.957 [2024-11-20T03:25:45.592Z] =================================================================================================================== 00:18:55.957 [2024-11-20T03:25:45.592Z] Total : 11065.62 43.23 0.00 0.00 34892.88 123.42 31365.70 00:18:57.339 00:18:57.339 real 0m7.215s 00:18:57.339 user 0m13.338s 00:18:57.339 sys 0m0.270s 00:18:57.339 03:25:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.339 03:25:46 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:57.339 ************************************ 00:18:57.339 END TEST bdev_verify 00:18:57.339 ************************************ 00:18:57.339 03:25:46 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:57.339 03:25:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:57.339 03:25:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.339 03:25:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:57.339 ************************************ 00:18:57.339 START TEST bdev_verify_big_io 00:18:57.339 ************************************ 00:18:57.339 03:25:46 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:57.339 [2024-11-20 03:25:46.818821] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:57.339 [2024-11-20 03:25:46.818951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90427 ] 00:18:57.599 [2024-11-20 03:25:46.994428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:57.599 [2024-11-20 03:25:47.104481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.599 [2024-11-20 03:25:47.104495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.169 Running I/O for 5 seconds... 
00:19:00.487 633.00 IOPS, 39.56 MiB/s [2024-11-20T03:25:51.070Z] 761.00 IOPS, 47.56 MiB/s [2024-11-20T03:25:52.063Z] 803.67 IOPS, 50.23 MiB/s [2024-11-20T03:25:53.003Z] 808.75 IOPS, 50.55 MiB/s [2024-11-20T03:25:53.003Z] 812.60 IOPS, 50.79 MiB/s 00:19:03.368 Latency(us) 00:19:03.368 [2024-11-20T03:25:53.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.368 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:03.368 Verification LBA range: start 0x0 length 0x200 00:19:03.368 raid5f : 5.11 347.62 21.73 0.00 0.00 9135222.74 414.97 380967.35 00:19:03.368 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:03.368 Verification LBA range: start 0x200 length 0x200 00:19:03.368 raid5f : 5.25 459.07 28.69 0.00 0.00 6989106.77 169.92 302209.68 00:19:03.368 [2024-11-20T03:25:53.003Z] =================================================================================================================== 00:19:03.368 [2024-11-20T03:25:53.003Z] Total : 806.69 50.42 0.00 0.00 7900014.91 169.92 380967.35 00:19:04.750 00:19:04.750 real 0m7.462s 00:19:04.750 user 0m13.827s 00:19:04.750 sys 0m0.272s 00:19:04.750 03:25:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.750 03:25:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:04.750 ************************************ 00:19:04.750 END TEST bdev_verify_big_io 00:19:04.750 ************************************ 00:19:04.750 03:25:54 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:04.750 03:25:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:04.750 03:25:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.750 03:25:54 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:04.750 ************************************ 00:19:04.750 START TEST bdev_write_zeroes 00:19:04.750 ************************************ 00:19:04.750 03:25:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:04.750 [2024-11-20 03:25:54.354069] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:19:04.750 [2024-11-20 03:25:54.354178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90521 ] 00:19:05.010 [2024-11-20 03:25:54.527700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.010 [2024-11-20 03:25:54.633678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.581 Running I/O for 1 seconds... 
00:19:06.521 30351.00 IOPS, 118.56 MiB/s 00:19:06.521 Latency(us) 00:19:06.521 [2024-11-20T03:25:56.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.521 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:06.521 raid5f : 1.01 30317.95 118.43 0.00 0.00 4209.72 1352.22 5780.90 00:19:06.521 [2024-11-20T03:25:56.156Z] =================================================================================================================== 00:19:06.521 [2024-11-20T03:25:56.156Z] Total : 30317.95 118.43 0.00 0.00 4209.72 1352.22 5780.90 00:19:07.904 00:19:07.904 real 0m3.172s 00:19:07.904 user 0m2.794s 00:19:07.904 sys 0m0.253s 00:19:07.904 03:25:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.904 03:25:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:07.904 ************************************ 00:19:07.904 END TEST bdev_write_zeroes 00:19:07.904 ************************************ 00:19:07.904 03:25:57 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:07.904 03:25:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:07.904 03:25:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.904 03:25:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.904 ************************************ 00:19:07.904 START TEST bdev_json_nonenclosed 00:19:07.904 ************************************ 00:19:07.904 03:25:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:08.163 [2024-11-20 
03:25:57.614881] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:19:08.163 [2024-11-20 03:25:57.615011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90574 ] 00:19:08.423 [2024-11-20 03:25:57.797156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.423 [2024-11-20 03:25:57.906043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.423 [2024-11-20 03:25:57.906137] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:08.423 [2024-11-20 03:25:57.906169] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:08.423 [2024-11-20 03:25:57.906182] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:08.684 00:19:08.684 real 0m0.628s 00:19:08.684 user 0m0.389s 00:19:08.684 sys 0m0.134s 00:19:08.684 03:25:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.684 03:25:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:08.684 ************************************ 00:19:08.684 END TEST bdev_json_nonenclosed 00:19:08.684 ************************************ 00:19:08.684 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:08.684 03:25:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:08.684 03:25:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.684 03:25:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:08.684 
************************************ 00:19:08.684 START TEST bdev_json_nonarray 00:19:08.684 ************************************ 00:19:08.684 03:25:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:08.684 [2024-11-20 03:25:58.310467] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:19:08.684 [2024-11-20 03:25:58.310601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90605 ] 00:19:08.943 [2024-11-20 03:25:58.496955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.203 [2024-11-20 03:25:58.605816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.203 [2024-11-20 03:25:58.605926] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:09.203 [2024-11-20 03:25:58.605944] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:09.203 [2024-11-20 03:25:58.605962] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:09.464 00:19:09.464 real 0m0.627s 00:19:09.464 user 0m0.381s 00:19:09.464 sys 0m0.141s 00:19:09.464 03:25:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.464 03:25:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:09.464 ************************************ 00:19:09.464 END TEST bdev_json_nonarray 00:19:09.464 ************************************ 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:09.464 03:25:58 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:09.464 00:19:09.464 real 0m47.293s 00:19:09.464 user 1m3.711s 00:19:09.464 sys 0m4.849s 00:19:09.464 03:25:58 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.464 03:25:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:09.464 
************************************ 00:19:09.464 END TEST blockdev_raid5f 00:19:09.464 ************************************ 00:19:09.464 03:25:58 -- spdk/autotest.sh@194 -- # uname -s 00:19:09.464 03:25:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:09.464 03:25:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:09.464 03:25:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:09.464 03:25:58 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:09.464 03:25:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.464 03:25:58 -- common/autotest_common.sh@10 -- # set +x 00:19:09.464 03:25:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:09.464 03:25:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:09.464 03:25:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:09.464 03:25:59 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:09.464 03:25:59 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:09.464 03:25:59 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:09.464 03:25:59 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:09.464 03:25:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.464 03:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:09.464 03:25:59 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:09.464 03:25:59 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:09.464 03:25:59 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:09.464 03:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:12.006 INFO: APP EXITING 00:19:12.006 INFO: killing all VMs 00:19:12.006 INFO: killing vhost app 00:19:12.006 INFO: EXIT DONE 00:19:12.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:12.577 Waiting for block devices as requested 00:19:12.577 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:12.577 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:13.519 Cleaning 00:19:13.519 Removing: /var/run/dpdk/spdk0/config 00:19:13.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:13.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:13.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:13.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:13.519 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:13.519 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:13.519 Removing: /dev/shm/spdk_tgt_trace.pid56821 00:19:13.519 Removing: /var/run/dpdk/spdk0 00:19:13.519 Removing: /var/run/dpdk/spdk_pid56586 00:19:13.519 Removing: /var/run/dpdk/spdk_pid56821 00:19:13.519 Removing: /var/run/dpdk/spdk_pid57056 00:19:13.519 Removing: /var/run/dpdk/spdk_pid57160 00:19:13.519 Removing: /var/run/dpdk/spdk_pid57216 00:19:13.519 Removing: /var/run/dpdk/spdk_pid57344 00:19:13.519 Removing: /var/run/dpdk/spdk_pid57362 
00:19:13.519 Removing: /var/run/dpdk/spdk_pid57572 00:19:13.519 Removing: /var/run/dpdk/spdk_pid57688 00:19:13.519 Removing: /var/run/dpdk/spdk_pid57791 00:19:13.780 Removing: /var/run/dpdk/spdk_pid57917 00:19:13.780 Removing: /var/run/dpdk/spdk_pid58021 00:19:13.780 Removing: /var/run/dpdk/spdk_pid58060 00:19:13.780 Removing: /var/run/dpdk/spdk_pid58097 00:19:13.780 Removing: /var/run/dpdk/spdk_pid58173 00:19:13.780 Removing: /var/run/dpdk/spdk_pid58295 00:19:13.780 Removing: /var/run/dpdk/spdk_pid58737 00:19:13.780 Removing: /var/run/dpdk/spdk_pid58814 00:19:13.780 Removing: /var/run/dpdk/spdk_pid58883 00:19:13.780 Removing: /var/run/dpdk/spdk_pid58904 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59047 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59067 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59214 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59236 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59307 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59325 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59389 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59413 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59613 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59650 00:19:13.780 Removing: /var/run/dpdk/spdk_pid59739 00:19:13.780 Removing: /var/run/dpdk/spdk_pid61084 00:19:13.780 Removing: /var/run/dpdk/spdk_pid61296 00:19:13.780 Removing: /var/run/dpdk/spdk_pid61436 00:19:13.780 Removing: /var/run/dpdk/spdk_pid62079 00:19:13.780 Removing: /var/run/dpdk/spdk_pid62291 00:19:13.780 Removing: /var/run/dpdk/spdk_pid62432 00:19:13.780 Removing: /var/run/dpdk/spdk_pid63075 00:19:13.780 Removing: /var/run/dpdk/spdk_pid63400 00:19:13.780 Removing: /var/run/dpdk/spdk_pid63540 00:19:13.780 Removing: /var/run/dpdk/spdk_pid64925 00:19:13.780 Removing: /var/run/dpdk/spdk_pid65178 00:19:13.780 Removing: /var/run/dpdk/spdk_pid65318 00:19:13.780 Removing: /var/run/dpdk/spdk_pid66709 00:19:13.780 Removing: /var/run/dpdk/spdk_pid66962 00:19:13.780 Removing: /var/run/dpdk/spdk_pid67113 
00:19:13.780 Removing: /var/run/dpdk/spdk_pid68498 00:19:13.780 Removing: /var/run/dpdk/spdk_pid68939 00:19:13.780 Removing: /var/run/dpdk/spdk_pid69090 00:19:13.780 Removing: /var/run/dpdk/spdk_pid70570 00:19:13.780 Removing: /var/run/dpdk/spdk_pid70829 00:19:13.780 Removing: /var/run/dpdk/spdk_pid70981 00:19:13.780 Removing: /var/run/dpdk/spdk_pid72463 00:19:13.780 Removing: /var/run/dpdk/spdk_pid72734 00:19:13.780 Removing: /var/run/dpdk/spdk_pid72882 00:19:13.780 Removing: /var/run/dpdk/spdk_pid74362 00:19:13.780 Removing: /var/run/dpdk/spdk_pid74849 00:19:13.780 Removing: /var/run/dpdk/spdk_pid75000 00:19:13.780 Removing: /var/run/dpdk/spdk_pid75145 00:19:13.780 Removing: /var/run/dpdk/spdk_pid75564 00:19:13.780 Removing: /var/run/dpdk/spdk_pid76292 00:19:13.780 Removing: /var/run/dpdk/spdk_pid76666 00:19:13.780 Removing: /var/run/dpdk/spdk_pid77374 00:19:13.780 Removing: /var/run/dpdk/spdk_pid77819 00:19:13.780 Removing: /var/run/dpdk/spdk_pid78574 00:19:13.780 Removing: /var/run/dpdk/spdk_pid78977 00:19:13.780 Removing: /var/run/dpdk/spdk_pid80941 00:19:14.041 Removing: /var/run/dpdk/spdk_pid81385 00:19:14.041 Removing: /var/run/dpdk/spdk_pid81824 00:19:14.041 Removing: /var/run/dpdk/spdk_pid83919 00:19:14.041 Removing: /var/run/dpdk/spdk_pid84401 00:19:14.041 Removing: /var/run/dpdk/spdk_pid84921 00:19:14.041 Removing: /var/run/dpdk/spdk_pid85991 00:19:14.041 Removing: /var/run/dpdk/spdk_pid86318 00:19:14.041 Removing: /var/run/dpdk/spdk_pid87256 00:19:14.041 Removing: /var/run/dpdk/spdk_pid87579 00:19:14.041 Removing: /var/run/dpdk/spdk_pid88518 00:19:14.041 Removing: /var/run/dpdk/spdk_pid88841 00:19:14.041 Removing: /var/run/dpdk/spdk_pid89517 00:19:14.041 Removing: /var/run/dpdk/spdk_pid89798 00:19:14.041 Removing: /var/run/dpdk/spdk_pid89864 00:19:14.041 Removing: /var/run/dpdk/spdk_pid89906 00:19:14.041 Removing: /var/run/dpdk/spdk_pid90150 00:19:14.041 Removing: /var/run/dpdk/spdk_pid90330 00:19:14.041 Removing: /var/run/dpdk/spdk_pid90427 
00:19:14.041 Removing: /var/run/dpdk/spdk_pid90521 00:19:14.041 Removing: /var/run/dpdk/spdk_pid90574 00:19:14.041 Removing: /var/run/dpdk/spdk_pid90605 00:19:14.041 Clean 00:19:14.041 03:26:03 -- common/autotest_common.sh@1453 -- # return 0 00:19:14.041 03:26:03 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:14.041 03:26:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.041 03:26:03 -- common/autotest_common.sh@10 -- # set +x 00:19:14.041 03:26:03 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:14.041 03:26:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.041 03:26:03 -- common/autotest_common.sh@10 -- # set +x 00:19:14.301 03:26:03 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:14.301 03:26:03 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:14.301 03:26:03 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:14.301 03:26:03 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:14.301 03:26:03 -- spdk/autotest.sh@398 -- # hostname 00:19:14.301 03:26:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:14.301 geninfo: WARNING: invalid characters removed from testname! 
00:19:36.254 03:26:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:37.635 03:26:27 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:40.174 03:26:29 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:41.555 03:26:31 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:43.465 03:26:33 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:45.431 03:26:35 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:47.345 03:26:36 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:47.345 03:26:36 -- spdk/autorun.sh@1 -- $ timing_finish 00:19:47.345 03:26:36 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:19:47.345 03:26:36 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:47.345 03:26:36 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:19:47.345 03:26:36 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:47.605 + [[ -n 5434 ]] 00:19:47.605 + sudo kill 5434 00:19:47.616 [Pipeline] } 00:19:47.633 [Pipeline] // timeout 00:19:47.639 [Pipeline] } 00:19:47.656 [Pipeline] // stage 00:19:47.662 [Pipeline] } 00:19:47.679 [Pipeline] // catchError 00:19:47.689 [Pipeline] stage 00:19:47.692 [Pipeline] { (Stop VM) 00:19:47.706 [Pipeline] sh 00:19:47.991 + vagrant halt 00:19:50.534 ==> default: Halting domain... 00:19:58.686 [Pipeline] sh 00:19:58.975 + vagrant destroy -f 00:20:01.517 ==> default: Removing domain... 
00:20:01.531 [Pipeline] sh 00:20:01.817 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:01.827 [Pipeline] } 00:20:01.842 [Pipeline] // stage 00:20:01.848 [Pipeline] } 00:20:01.863 [Pipeline] // dir 00:20:01.870 [Pipeline] } 00:20:01.884 [Pipeline] // wrap 00:20:01.890 [Pipeline] } 00:20:01.905 [Pipeline] // catchError 00:20:01.915 [Pipeline] stage 00:20:01.917 [Pipeline] { (Epilogue) 00:20:01.929 [Pipeline] sh 00:20:02.215 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:06.428 [Pipeline] catchError 00:20:06.430 [Pipeline] { 00:20:06.445 [Pipeline] sh 00:20:06.732 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:06.732 Artifacts sizes are good 00:20:06.742 [Pipeline] } 00:20:06.758 [Pipeline] // catchError 00:20:06.772 [Pipeline] archiveArtifacts 00:20:06.781 Archiving artifacts 00:20:06.926 [Pipeline] cleanWs 00:20:06.944 [WS-CLEANUP] Deleting project workspace... 00:20:06.944 [WS-CLEANUP] Deferred wipeout is used... 00:20:06.972 [WS-CLEANUP] done 00:20:06.974 [Pipeline] } 00:20:06.992 [Pipeline] // stage 00:20:06.999 [Pipeline] } 00:20:07.015 [Pipeline] // node 00:20:07.020 [Pipeline] End of Pipeline 00:20:07.076 Finished: SUCCESS